Although much is known about psychopathology such as post-traumatic stress disorder (PTSD) and depression following bushfire (also known as wildfire), little is known about the prevalence, trajectory and impacts of general adjustment difficulties following exposure to these now-common events.
Aims
This was an exploratory analysis of a large cohort study that examined the prevalence, trajectory and risk factors of probable adjustment disorder over a 10-year period following bushfire exposure.
Method
The Beyond Bushfires study assessed individuals exposed to a large and deadly bushfire across three time points spanning 10 years. Self-report survey data from participants from areas with moderate and high levels of fire-affectedness were analysed: n = 802 participants at Wave 1 (3–4 years post-fires), n = 596 at Wave 2 (5 years post-fires) and n = 436 at Wave 3 (10 years post-fires). Surveys indexed fire-related experiences and post-fire stressors, and comprised the six-item Kessler Psychological Distress Scale (probable adjustment disorder index), four-item Posttraumatic Stress Disorder Checklist (probable fire-related PTSD) and nine-item Patient Health Questionnaire (probable major depressive episode).
Results
Prevalence of probable adjustment disorder was 16% (Wave 1), 15% (Wave 2) and 19% (Wave 3). Probable adjustment disorder at 3–4 years post-fires predicted a five-fold increase in risk for escalating to severe psychiatric disorder (i.e. probable fire-related PTSD/major depressive episode) at 10 years post-fires, and was associated with post-fire income and relationship stressors.
Conclusions
Adjustment difficulties are prevalent post-disaster; many are maintained or exacerbated over time, resulting in increased risk of later disorder and difficulties in adaptation. Psychosocial interventions supporting survivors with adjustment difficulties may prevent progression to more severe disorder.
Evidence development for medical devices is often focused on satisfying regulatory requirements, with the result that health professional and payer expectations may not be met despite considerable investment in clinical trials. Early engagement with payers and health professionals could allow companies to understand these expectations and reflect them in clinical study design, increasing the chances of a positive coverage determination and adoption into clinical practice.
Methods
An example of early engagement through the EXCITE International model is described, using an early technology review (ETR) that includes engagement with payers and health professionals to better inform the evidence companies develop to meet their expectations. The ETR is based on an early evidence review, a framework of expectations that guides the process, and identification of gaps in the evidence. The first fourteen ETRs were reviewed for examples of advice to companies in which payers and health professionals provided additional information thought likely to affect downstream outcomes or strategic direction. Because confidentiality imposed limitations, the examples were genericized.
Results
Advice obtained through early engagement can, via a structured, objective, evidence-based approach, align evidence development with the expectations of payers and health professionals. This could reduce the risk of business-related adverse outcomes, such as failure to secure a positive coverage determination and/or acceptance by expert health professionals.
Conclusions
Early engagement with key stakeholders, exemplified by the ETR approach, offers an alternative to the current focus on regulatory expectations. This could reduce the time to reimbursement and clinical adoption, benefiting patient outcomes and/or health system efficiencies.
As biological organisms, we age and, eventually, die. However, age’s deteriorating effects may not be universal. Some theoretical entities, owing to their synthetic composition, could exist independently of aging: artificial general intelligence (AGI). With adequate access to resources, an AGI could theoretically be ageless and would be, in some sense, immortal. Yet this need not be inevitable. Designers could imbue AGIs with artificial mortality via an internal shut-off point. The question, though, is: should they? Should researchers curtail an AGI’s potentially endless lifespan by deliberately making it mortal? It is this question that this article explores. First, it considers what type of AGI is under discussion before outlining how such beings could be ageless. Then, after clarifying the type of immortality under discussion and arguing that imbuing an AGI with synthetic aging would be person-affecting, the article explores four core conundrums: (i) deliberately causing a morally significant being’s death; (ii) immortality’s associated harms; (iii) concerns about immortality’s unequal assignment; and (iv) the danger of immortal AGI overlords. The article concludes that although prudence requires that we create an aging AGI, in the face of the material harm such an action would constitute, this is an insufficient reason to justify doing so.
Individuals living with severe mental illness can face significant emotional, physical and social challenges. Collaborative care combines clinical and organisational components.
Aims
We tested whether a primary care-based collaborative care model (PARTNERS) would improve quality of life for people with diagnoses of schizophrenia, bipolar disorder or other psychoses, compared with usual care.
Method
We conducted a general practice-based, cluster randomised controlled superiority trial. Practices were recruited from four English regions and allocated (1:1) to intervention or control. Individuals receiving limited input from secondary care, or under primary care only, were eligible. The 12-month PARTNERS intervention incorporated person-centred coaching support and liaison work. The primary outcome was quality of life as measured by the Manchester Short Assessment of Quality of Life (MANSA).
Results
We allocated 39 general practices, with 198 participants, to the PARTNERS intervention (20 practices, 116 participants) or control (19 practices, 82 participants). Primary outcome data were available for 99 (85.3%) intervention and 71 (86.6%) control participants. Mean change in overall MANSA score did not differ between the groups (intervention: 0.25, s.d. 0.73; control: 0.21, s.d. 0.86; estimated fully adjusted between-group difference 0.03, 95% CI −0.25 to 0.31; P = 0.819). Acute mental health episodes (safety outcome) included three crises in the intervention group and four in the control group.
Conclusions
There was no evidence of a difference in quality of life, as measured with the MANSA, between those receiving the PARTNERS intervention and usual care. Shifting care to primary care was not associated with increased adverse outcomes.
Pompe disease results from lysosomal acid α-glucosidase deficiency, which leads to cardiomyopathy in all infantile-onset patients and in occasional late-onset patients. Cardiac assessment is important for its diagnosis and management. This article presents unpublished cardiac findings, concomitant medications, and cardiac efficacy and safety outcomes from the ADVANCE study; trajectories of patients with abnormal left ventricular mass z scores at enrolment; and post hoc analyses of on-treatment left ventricular mass and systolic blood pressure z scores by disease phenotype, GAA genotype, and “fraction of life” (defined as the fraction of life spent on pre-study 160 L production-scale alglucosidase alfa). ADVANCE evaluated 52 weeks of treatment with 4000 L production-scale alglucosidase alfa in United States patients aged ≥1 year with Pompe disease who had previously received 160 L production-scale alglucosidase alfa. M-mode echocardiography and 12-lead electrocardiography were performed at enrolment and Week 52. Sixty-seven patients had complete left ventricular mass z scores, which decreased by Week 52 (infantile-onset patients: change −0.8 ± 1.83, 95% confidence interval −1.3 to −0.2; all patients: change −0.5 ± 1.71, 95% confidence interval −1.0 to −0.1). Patients with a “fraction of life” <0.79 showed decreasing left ventricular mass z scores (enrolment: +0.1 ± 3.0; Week 52: −1.1 ± 2.0); those with a “fraction of life” ≥0.79 remained stable (enrolment: −0.9 ± 1.5; Week 52: −0.9 ± 1.4). Systolic blood pressure z scores were stable from enrolment to Week 52, and no cohort developed systemic hypertension. Eight patients had Wolff–Parkinson–White syndrome. Cardiac hypertrophy and dysrhythmia in ADVANCE patients at or before enrolment were typical of Pompe disease. Therapy with 4000 L production-scale alglucosidase alfa maintained fractional shortening and left ventricular posterior and septal end-diastolic thicknesses, and improved left ventricular mass z scores.
Social Media Statement: Post hoc analyses of the ADVANCE study cohort of 113 children support ongoing cardiac monitoring and concomitant management of children with Pompe disease on long-term alglucosidase alfa to functionally improve cardiomyopathy and/or dysrhythmia.
The current study argues that population prevalence estimates for mental health disorders, or changes in mean scores over time, may not adequately reflect the heterogeneity in mental health response to the COVID-19 pandemic within the population.
Methods
The COVID-19 Psychological Research Consortium (C19PRC) Study is a longitudinal, nationally representative, online survey of UK adults. The current study analysed data from its first three waves of data collection: Wave 1 (March 2020, N = 2025), Wave 2 (April 2020, N = 1406) and Wave 3 (July 2020, N = 1166). Anxiety-depression was measured using the Patient Health Questionnaire Anxiety and Depression Scale (a composite measure of the PHQ-9 and GAD-7) and COVID-19-related posttraumatic stress disorder (PTSD) with the International Trauma Questionnaire. Changes in mental health outcomes were modelled across the three waves. Latent class growth analysis was used to identify subgroups of individuals with different trajectories of change in anxiety-depression and COVID-19 PTSD. Latent class membership was regressed on baseline characteristics.
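To make the trajectory-modelling step concrete, the sketch below illustrates the general idea behind latent class growth analysis on simulated three-wave scores: fit a per-person linear trajectory, then cluster the resulting intercept and slope estimates with a Gaussian mixture and choose the number of classes by BIC. This is a simplified stand-in for LCGA (which estimates growth parameters and class membership jointly), not the study's actual analysis pipeline; all data, class shapes and settings below are invented for illustration.

```python
# Simplified stand-in for latent class growth analysis (LCGA): per-person linear
# trajectories are summarised as (intercept, slope) pairs, which are then clustered
# with a Gaussian mixture. Data are simulated; this only illustrates the idea of
# identifying stable, deteriorating and improving trajectory subgroups.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
waves = np.array([0.0, 1.0, 2.0])                       # three assessment waves

# Simulate three latent trajectory types: stable-low, deteriorating, improving.
n_per = 100
intercepts = np.concatenate([rng.normal(5, 1, n_per),    # stable-low
                             rng.normal(8, 1, n_per),    # deteriorating (rising)
                             rng.normal(14, 1, n_per)])  # improving (falling)
slopes = np.concatenate([rng.normal(0, 0.3, n_per),
                         rng.normal(3, 0.5, n_per),
                         rng.normal(-3, 0.5, n_per)])
scores = intercepts[:, None] + slopes[:, None] * waves + rng.normal(0, 1, (3 * n_per, 3))

# Per-person least-squares intercept and slope across the waves.
X = np.column_stack([np.ones_like(waves), waves])
coefs = np.linalg.lstsq(X, scores.T, rcond=None)[0].T    # columns: intercept, slope

# Choose the number of classes by BIC, as is common in trajectory-class reporting.
models = {k: GaussianMixture(k, random_state=0).fit(coefs) for k in range(1, 6)}
best_k = min(models, key=lambda k: models[k].bic(coefs))
labels = models[best_k].predict(coefs)
print(f"classes selected by BIC: {best_k}")
for c in range(best_k):
    m = coefs[labels == c].mean(axis=0)
    print(f"class {c}: mean intercept {m[0]:.1f}, mean slope {m[1]:+.1f}")
```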
Results
Overall prevalence of anxiety-depression remained stable, while COVID-19 PTSD reduced between Waves 2 and 3. Heterogeneity in mental health response was found, and hypothesised classes reflecting (i) stability, (ii) improvement and (iii) deterioration in mental health were identified. Psychological factors were most likely to differentiate the improving, deteriorating and high-stable classes from the low-stable mental health trajectories.
Conclusions
A low-stable profile characterised by little-to-no psychological distress (‘resilient’ class) was the most common trajectory for both anxiety-depression and COVID-19 PTSD. Monitoring these trajectories is necessary moving forward, in particular for the ~30% of individuals with increasing anxiety-depression levels.
The coronavirus disease 2019 (COVID-19) emergency has led to numerous attempts to assess the impact of the pandemic on population mental health. The findings indicate an increase in depression and anxiety but have been limited by the lack of specificity about which aspects of the pandemic (e.g. viral exposure or economic threats) have led to adverse mental health outcomes.
Methods
Network analyses were conducted on data from wave 1 (N = 2025, recruited 23 March–28 March 2020) and wave 2 (N = 1406, recontacted 22 April–1 May 2020) of the COVID-19 Psychological Research Consortium Study, an online longitudinal survey of a representative sample of the UK adult population. Our models included depression (PHQ-9), generalized anxiety (GAD-7) and trauma symptoms (ITQ), as well as measures of COVID-specific anxiety, exposure to the virus in self and close others, and economic loss due to the pandemic.
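For readers unfamiliar with symptom-network estimation, the sketch below shows one common way an Ising network over dichotomised symptoms can be estimated: node-wise L1-regularised logistic regressions (the "eLasso" idea), with the directed estimates symmetrised into edge weights. This is a generic illustration on simulated binary data, not the study's analysis code; the sample size, number of symptoms and regularisation strength are assumptions.

```python
# Hedged sketch of Ising network estimation via node-wise L1-regularised logistic
# regressions: each binary symptom is regressed on all the others, and coefficients
# are symmetrised into edge weights. All data and settings are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n, p = 1400, 6                                # respondents x dichotomised symptoms
# Simulate loosely correlated binary symptoms driven by a shared latent factor.
latent = rng.normal(size=(n, 1))
X = (latent + rng.normal(size=(n, p)) > 0.5).astype(int)

edges = np.zeros((p, p))
for j in range(p):
    y = X[:, j]
    others = np.delete(X, j, axis=1)
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(others, y)
    idx = [k for k in range(p) if k != j]
    edges[j, idx] = clf.coef_.ravel()

# Symmetrise the two directed estimates for each pair to obtain the network.
network = (edges + edges.T) / 2
print(np.round(network, 2))
```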
Results
A mixed graphical model at wave 1 identified a potential pathway from economic adversity to anxiety symptoms via COVID-specific anxiety. There was no association between viral exposure and symptoms. Ising network models using clinical cut-offs for symptom scores at each wave yielded similar findings, with the exception of a modest effect of viral exposure on trauma symptoms at wave 1 only. Anxiety and depression symptoms formed separate clusters at wave 1 but not wave 2.
Conclusions
The psychological impact of the pandemic evolved in the early phase of lockdown. COVID-related anxiety may represent the mechanism through which economic consequences of the pandemic are associated with psychiatric symptoms.
There is global interest in the reconfiguration of community mental health services, including primary care, to improve clinical and cost effectiveness.
Aims
This study seeks to describe patterns of service use, continuity of care, health risks, physical healthcare monitoring and the balance between primary and secondary mental healthcare for people with severe mental illness in receipt of secondary mental healthcare in the UK.
Method
We conducted an epidemiological medical records review in three UK sites. We identified 297 cases randomly selected from the three participating mental health services. Data were manually extracted from electronic patient medical records from both secondary and primary care, for a 2-year period (2012–2014). Continuous data were summarised by mean and s.d. or median and interquartile range (IQR). Categorical data were summarised as percentages.
Results
The majority of care was from secondary care practitioners: of the 18 210 direct contacts recorded, 76% were from secondary care (median, 36.5; IQR, 14–68) and 24% were from primary care (median, 10; IQR, 5–20). There was evidence of poor longitudinal continuity: in primary care, 31% of people had poor continuity (Modified Modified Continuity Index ≤0.5), and 43% had a single named care coordinator in secondary care services over the 2 years.
Conclusions
The study indicates scope for improvement in supporting mental health service delivery in primary care. Greater knowledge of how care is organised presents an opportunity to ensure some rebalancing of the care that all people with severe mental illness receive, when they need it. A future publication will examine differences between the three sites that participated in this study.
The COVID-19 pandemic has created an unprecedented global crisis, necessitating drastic changes to living conditions, social life, personal freedom and economic activity. No study has yet examined the presence of psychiatric symptoms in the UK population under similar conditions.
Aims
We investigated the prevalence of COVID-19-related anxiety, generalised anxiety, depression and trauma symptoms in the UK population during an early phase of the pandemic, and estimated associations with variables likely to influence these symptoms.
Method
Between 23 and 28 March 2020, a quota sample of 2025 UK adults aged 18 years and older, stratified by age, gender and household income, was recruited by online survey company Qualtrics. Participants completed standardised measures of depression, generalised anxiety and trauma symptoms relating to the pandemic. Bivariate and multivariate associations were calculated for demographic and health-related variables.
Results
Higher levels of anxiety, depression and trauma symptoms were reported compared with previous population studies, but not dramatically so. Anxiety/depression and trauma symptoms were predicted by young age, presence of children in the home, and high estimates of personal risk. Anxiety and depression were also predicted by low income, loss of income and pre-existing health conditions in self and others. Specific anxiety about COVID-19 was greater in older participants.
Conclusions
This study showed a modest increase in the prevalence of mental health problems in the early stages of the pandemic, and these problems were predicted by several specific COVID-related variables. Further similar surveys, particularly of those with children at home, are required as the pandemic progresses.
In 2015, excavations at Stainton Quarry, Furness, Cumbria, recovered remains that provide a unique insight into Early Neolithic farming in the vicinity. Five pits, a post-hole, and deposits within a tree-throw and three crevices in a limestone outcrop were investigated. The latter deposits yielded potentially the largest assemblage of Carinated Bowl fragments yet recovered in Cumbria. Lipid analysis identified dairy fats within nine of these sherds. This was consistent with previous, larger studies but represents the first evidence that dairying was an important component of Early Neolithic subsistence strategies in Cumbria. In addition, two deliberately broken polished stone axes, an Arran pitchstone core, a small number of flint tools and debitage, and a tuff flake were retrieved. The site also produced moderate amounts of charred grain, hazelnut shell, charcoal, and burnt bone. Most of the charred grain came from an Early Neolithic pit and potentially comprises the largest assemblage of such material recovered from Cumbria to date. Radiocarbon dating indicated activity sometime during the 40th–35th centuries cal BC, as well as an earlier presence during the 46th–45th centuries cal BC. Later activity during the Chalcolithic and the Early Bronze Age was also demonstrated. The dense concentration of material and the fragmentary and abraded nature of the pottery suggested redeposition from an above-ground midden. Furthermore, the data recovered during the investigation have wider implications regarding the nature and use of the surrounding landscape during the Early Neolithic, suggesting higher levels of settlement permanence, greater reliance on domesticated resources, and a possibly different topographical focus for settlement than currently proposed.
Conservation resources are limited, yet an increasing number of species are under threat. Assessing species for their conservation needs is, therefore, a vital first step in identifying and prioritizing species for both ex situ and in situ conservation actions. Using a transparent, logical and objective method, the Conservation Needs Assessment process developed by Amphibian Ark uses current knowledge of species in the wild to determine those with the most pressing conservation needs, and provides a foundation for the development of holistic conservation action plans that combine in situ and ex situ actions as appropriate. These assessments allow us to maximize the impact of limited conservation resources by identifying which measures could best serve those species requiring help. The Conservation Needs Assessment complements the IUCN Red List assessment, and together they provide a more holistic guide to conservation priorities and actions. Conservation Needs Assessments generate national prioritized lists of species recommended for conservation action. These can subsequently be used to assist in the development of species recovery plans and national action plans, or to inform national conservation priorities better. Additional tools that will evaluate the recommendations for ex situ rescues, to determine the best candidates for conservation breeding programmes, are currently under development.
Branigan & Pickering (B&P) claim that the success of structural priming as a method should “end the current reliance on acceptability judgments.” Structural priming is an interesting and useful phenomenon, but we are dubious that the effect is powerful enough to test many detailed claims about specific points of syntactic theory.
This study investigated the impact of ACTAZIN™ green (2400 and 600 mg) and Livaux™ (2400 mg) gold kiwifruit supplements on faecal microbial composition and metabolites in healthy and functionally constipated (FC) participants. The participants were recruited into the healthy group (n 20; one of whom did not complete the study) and the FC group (n 9), each of whom consumed all the treatments and a placebo (isomalt) for 4 weeks in a randomised cross-over design interspersed with 2-week washout periods. Modification of faecal microbiota composition and metabolism was determined by 16S rRNA gene sequencing and GC, and colonic pH was calculated using SmartPill® wireless motility capsules. A total of thirty-two taxa were measured at greater than 1 % abundance in at least one sample, ten of which differed significantly between the baseline healthy and FC groups. Specifically, Bacteroidales and Roseburia spp. were significantly more abundant (P < 0·05) in the healthy group and taxa including Ruminococcaceae, Dorea spp. and Akkermansia spp. were significantly more abundant (P < 0·05) in the FC group. In the FC group, Faecalibacterium prausnitzii abundance significantly increased (P = 0·024) from 3·4 to 7·0 % following Livaux™ supplementation, with eight of the nine participants showing a net increase. Lower proportions of F. prausnitzii are often associated with gastrointestinal disorders. The discovery that Livaux™ supplementation increased F. prausnitzii abundance offers a potential strategy for improving gut microbiota composition, as F. prausnitzii is a butyrate producer and has also been shown to exert anti-inflammatory effects in many studies.
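To illustrate how a within-participant abundance change of this kind might be tested, the sketch below converts 16S counts to relative abundances and applies a paired Wilcoxon signed-rank test to invented pre/post values for nine participants. The abstract does not state which paired test produced P = 0·024, so the test choice and every number here are assumptions for illustration only.

```python
# Hedged illustration: relative abundance from 16S counts, then a paired test on the
# pre- vs post-supplementation change in one taxon. All values are invented; the
# Wilcoxon signed-rank test is only one plausible choice of paired test.
import numpy as np
from scipy.stats import wilcoxon

# Counts would first be normalised within each sample, e.g.:
# rel_abundance = counts / counts.sum(axis=1, keepdims=True) * 100

# Invented F. prausnitzii relative abundances (%) for nine participants,
# mimicking the described pattern of eight of nine showing a net increase.
pre  = np.array([3.1, 2.8, 4.0, 3.5, 2.2, 4.4, 3.0, 3.9, 3.7])
post = np.array([6.5, 5.9, 8.2, 7.4, 4.1, 9.0, 2.6, 8.8, 7.2])

stat, p = wilcoxon(post, pre)
print(f"mean change: {np.mean(post - pre):+.1f} percentage points, P = {p:.3f}")
```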
The objective of this research was to determine the potential use of commercially available multispectral images to detect weeds at low densities during the critical period of weed control. Common lambsquarters seedlings were transplanted into plots of glyphosate-resistant corn at 0, 1, 2, and 4 plants/m2 at two sites, Agronomy Center for Research and Extension (ACRE) and Meig's Horticultural Research Farm at the Throckmorton–Purdue Agricultural Center (TPAC), in Indiana. Aerial multispectral images (12 to 16 cm pixel resolution) were taken 18 and 32 days after planting (DAP) at ACRE and 19 and 32 DAP at TPAC. Corn and common lambsquarters could not be reliably detected and differentiated at either site when weeds were 9 cm or less in height. However, economic threshold densities (2 and 4 plants/m2) of common lambsquarters could be distinguished from weed-free plots at TPAC when weeds were 17 cm in height. At this height, common lambsquarters plants were beyond the optimal height for glyphosate application, but could still be readily controlled with higher rates. Results from this study indicate that commercially available multispectral aerial imagery at current spatial resolutions does not provide consistently reliable data for detection of early season weeds in glyphosate-resistant corn cropping systems. Additional refinement in sensor spatial and spectral resolution is necessary to increase our ability to successfully detect early season weed infestations.
Pale swallowwort (PSW) and black swallowwort (BSW) are two viney milkweeds native to Europe that have increasingly become problematic and noxious weeds in eastern North America. An indigenous fungal isolate, Sclerotium rolfsii VrNY, was discovered causing significant mortality in a dense stand of PSW in a park in upstate New York. Although this fungus is a known pathogen with a broad host range, we evaluated the host potential of S. rolfsii VrNY on a limited range of related and nonrelated U.S. species as a critical first step to assess its suitability as a mycoherbicide for PSW and BSW. In addition, PSW and BSW produce the specific stereoisomer (−)-antofine, a compound with antimicrobial and phytotoxic activity that could inhibit the pathogen. Tests revealed this compound had no effect on S. rolfsii VrNY. This isolate caused significant mortality on all broadleaf plants tested (Asclepias syriaca, Asclepias curassavica, Apocynum cannabinum, Monarda fistulosa, Rudbeckia hirta, PSW, BSW) with the exception of Glycine max, and had no effect on the monocots Schizachyrium scoparium and Zea mays. Although these laboratory studies indicate that most broadleaf vegetation may be susceptible to the pathogen, S. rolfsii might have potential as a mycoherbicide in natural eco-niche environments where invasive PSW and BSW have already become the predominant vegetation. Further laboratory testing of S. rolfsii and limited field testing at the initial discovery site are needed in order to prevent premature rejection of this isolate as a potential management tool against these highly invasive weeds.
The objective of this research was to assess the accuracy of remote sensing for detecting weed species in soybean based on two primary criteria: the presence or absence of weeds and the identification of individual weed species. Treatments included weeds (giant foxtail and velvetleaf) grown in monoculture or interseeded with soybean, bare ground, and weed-free soybean. Aerial multispectral digital images were collected at or near soybean canopy closure from two field sites in 2001. Weedy pixels (1.3 m2) were separated from weed-free soybean and bare ground with no more than 11% error, depending on the site. However, the classification of weed species varied between sites. At one site, velvetleaf and giant foxtail were classified with no more than 17% error, when monoculture and interseeded plots were combined. However, classification errors were as high as 39% for velvetleaf and 17% for giant foxtail at the other site. Our results support the idea that remote sensing has potential for weed detection in soybean, particularly when weed management systems do not require differentiation among weed species. Additional research is needed to characterize the effect of weed density or cover and crop–weed phenology on classification accuracies.
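The two weed-detection studies above both reduce to supervised classification of multispectral pixels. As a rough, generic illustration (not the authors' image-processing workflow or classifier), the sketch below classifies simulated three-band pixel spectra for bare ground, weed-free soybean and weedy soybean using linear discriminant analysis and reports a confusion matrix; all band means and noise levels are invented, and the spectral overlap between the two vegetated classes is meant only to echo why separating weed species is harder than separating weedy from weed-free pixels.

```python
# Hedged sketch of supervised pixel classification from multispectral bands.
# Each simulated pixel has [green, red, near-infrared] reflectance values.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

def simulate(n, mean):
    """Draw n pixel spectra around an invented class mean."""
    return rng.normal(mean, 0.03, size=(n, 3))

X = np.vstack([simulate(500, [0.10, 0.12, 0.25]),        # bare ground
               simulate(500, [0.08, 0.05, 0.45]),        # weed-free soybean
               simulate(500, [0.09, 0.06, 0.40])])       # soybean with weeds
y = np.repeat([0, 1, 2], 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LinearDiscriminantAnalysis().fit(X_tr, y_tr)
pred = clf.predict(X_te)
print(confusion_matrix(y_te, pred))
print(f"overall error: {100 * (pred != y_te).mean():.1f}%")
```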
Aberrant microbiota composition and function have been linked to several pathologies, including type 2 diabetes. In animal models, prebiotics induce favourable changes in the intestinal microbiota, intestinal permeability (IP) and endotoxaemia, which are linked to concurrent improvement in glucose tolerance. This is the first study to investigate the link between IP, glucose tolerance and intestinal bacteria in human type 2 diabetes. In all, twenty-nine men with well-controlled type 2 diabetes were randomised to a prebiotic (galacto-oligosaccharide mixture) or placebo (maltodextrin) supplement (5·5 g/d for 12 weeks). Intestinal microbial community structure, IP, endotoxaemia, inflammatory markers and glucose tolerance were assessed at baseline and post intervention. IP was estimated by the urinary recovery of oral 51Cr-EDTA and glucose tolerance by insulin-modified intravenous glucose tolerance test. Intestinal microbial community analysis was performed by high-throughput next-generation sequencing of 16S rRNA amplicons and quantitative PCR. Prebiotic fibre supplementation had no significant effects on clinical outcomes or bacterial abundances compared with placebo; however, changes in the bacterial family Veillonellaceae correlated inversely with changes in glucose response and IL-6 levels (r −0·90, P=0·042 for both) following prebiotic intake. The absence of significant changes to the microbial community structure at a prebiotic dosage/length of supplementation shown to be effective in healthy individuals is an important finding. We propose that concurrent metformin treatment and the high heterogeneity of human type 2 diabetes may have played a significant role. The current study does not provide evidence for the role of prebiotics in the treatment of type 2 diabetes.
The thesis in this paper is that L2 speakers differ from L1 speakers in their ability to store and retrieve linguistic structure in memory. We would like to suggest that it is possible to go further than this thesis and develop a computational-level theory that explains why this mechanistic difference between L2 and L1 speakers exists. For this purpose, we believe a noisy channel model (Shannon, 1948; Levy, 2008; Levy, Bicknell, Slattery & Rayner, 2009; Gibson, Bergen & Piantadosi, 2013) is a good starting point. Under the reasonable assumption that L2 speakers have a less precise probabilistic representation of the syntax of their L2 than L1 speakers do, the noisy channel model straightforwardly predicts that L2 comprehenders will depend more on world knowledge and discourse factors when interpreting and recalling utterances (cf. Gibson, Sandberg, Fedorenko, Bergen & Kiran, 2015, for this assumption applied to language processing in persons with aphasia). Likewise, under the assumption that L2 speakers assume a higher error rate than L1 speakers do, the noisy channel model predicts that they will be more affected by alternative parses that are not directly compatible with the form of an utterance.
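To make the noisy-channel prediction concrete, the toy sketch below computes the posterior probability that a plausible near-neighbour sentence was intended, given that an implausible string was perceived, under Bayes' rule P(intended | perceived) ∝ P(perceived | intended) P(intended). Raising the assumed error rate (and, in the same spirit, flattening the syntactic prior) shifts interpretation toward the world-knowledge-consistent reading, which is the pattern predicted here for L2 comprehenders. All probabilities are invented for illustration.

```python
# Toy noisy-channel sketch: a comprehender perceives an implausible string and weighs
# it against a plausible alternative that differs by one edit. Numbers are invented.

def posterior(prior_plaus, prior_implaus, error_rate):
    """P(intended = plausible alternative | perceived implausible string).

    prior_*    : prior probability of each candidate intended sentence
                 (combining syntactic and world knowledge).
    error_rate : assumed probability that one word was corrupted in transmission.
    """
    like_implaus = 1.0 - error_rate      # perceived string transmitted faithfully
    like_plaus = error_rate              # perceived string arose via one corruption
    num = prior_plaus * like_plaus
    return num / (num + prior_implaus * like_implaus)

# "L1-like" comprehender: low assumed error rate -> literal reading mostly retained.
print(posterior(prior_plaus=0.02, prior_implaus=0.002, error_rate=0.05))  # ~0.34
# "L2-like" comprehender: higher assumed error rate -> plausible reading dominates.
print(posterior(prior_plaus=0.02, prior_implaus=0.002, error_rate=0.30))  # ~0.81
```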