Accurate diagnosis of bipolar disorder (BPD) is difficult in clinical practice, with an average delay between symptom onset and diagnosis of about 7 years. A depressive episode often precedes the first manic episode, making it difficult to distinguish BPD from unipolar major depressive disorder (MDD).
Aims
We use genome-wide association analyses (GWAS) to identify differential genetic factors and to develop predictors based on polygenic risk scores (PRS) that may aid early differential diagnosis.
Method
Based on individual genotypes from case–control cohorts of BPD and MDD shared through the Psychiatric Genomics Consortium, we compile case–case–control cohorts, applying a careful quality control procedure. In a resulting cohort of 51 149 individuals (15 532 BPD patients, 12 920 MDD patients and 22 697 controls), we perform a variety of GWAS and PRS analyses.
Results
Although our GWAS is not well powered to identify genome-wide significant loci, we find significant chip heritability and demonstrate the ability of the resulting PRS to distinguish BPD from MDD, including BPD cases with depressive onset (BPD-D). We replicate our PRS findings in an independent Danish cohort (iPSYCH 2015, N = 25 966). We observe strong genetic correlation between our case–case GWAS and that of case–control BPD.
Conclusions
We find that MDD and BPD, including BPD-D, are genetically distinct. Our findings suggest that controls, MDD patients and BPD patients lie primarily on a continuum of genetic risk. Future studies with larger and richer samples will likely yield a better understanding of these findings and enable the development of better genetic predictors distinguishing BPD and, importantly, BPD-D from MDD.
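As an illustration of the PRS discrimination analysis described above, here is a minimal sketch in Python. The file names, column names and AUC readout are hypothetical assumptions; it is not the authors' pipeline, which operated on PGC genotype data with dedicated PRS software.

```python
# Minimal sketch of the PRS discrimination step, assuming hypothetical file
# and column names; the actual analysis used PGC genotype data and dedicated
# PRS software.
import pandas as pd
from sklearn.metrics import roc_auc_score

# Hypothetical inputs: effect sizes from the case-case (BPD vs MDD) GWAS and
# an individuals-by-variants dosage matrix for the target cohort.
weights = pd.read_csv("case_case_gwas_betas.csv")          # columns: snp, beta
dosages = pd.read_csv("target_dosages.csv", index_col=0)   # columns: snp IDs
labels = pd.read_csv("target_phenotypes.csv", index_col=0)["is_bpd"]

# Score each individual as the beta-weighted sum of risk-allele dosages over
# the variants shared by both data-sets (rows assumed aligned across files).
shared = weights[weights["snp"].isin(dosages.columns)]
prs = dosages[shared["snp"]].to_numpy() @ shared["beta"].to_numpy()

# Standardise the score and ask how well it separates BPD from MDD cases.
prs = (prs - prs.mean()) / prs.std()
print("case-case PRS AUC:", roc_auc_score(labels, prs))
```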
Estimates suggest that 1 in 100 people in the UK live with facial scarring. Despite this prevalence, psychological support is limited.
Aims
The aim of this study was to strengthen the case for improving such support by determining the incidence and risk factors for anxiety and depression disorders in patients with facial scarring.
Method
A matched cohort study was performed. Patients were identified via secondary care data sources, using clinical codes for conditions resulting in facial scarring. A diagnosis of anxiety or depression was determined by linkage with the patient's primary care general practice data. Incidence was calculated per 1000 person-years at risk (PYAR). Logistic regression was used to determine risk factors.
Results
Between 2009 and 2018, 179 079 patients met the study criteria and were identified as having a facial scar; these were matched to 179 079 controls. The incidence of anxiety in the facial scarring group was 10.05 per 1000 PYAR compared with 7.48 per 1000 PYAR for controls. The incidence of depression in the facial scarring group was 16.28 per 1000 PYAR compared with 9.56 per 1000 PYAR for controls. Age at the time of scarring, previous history of anxiety or depression, female gender, socioeconomic status and classification of scarring increased the risk of both anxiety disorders and depression.
Conclusions
There is a high burden of anxiety disorders and depression in this patient group. The risk of these mental health disorders is largely determined by factors apparent at the time of injury, underscoring the need for psychological support.
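The incidence figures above are rates per 1000 person-years at risk (PYAR): new diagnoses divided by total follow-up time, scaled by 1000. A small worked sketch, with invented inputs chosen only to reproduce the reported anxiety rate:

```python
# Worked sketch of an incidence rate per 1000 person-years at risk (PYAR).
# The event count and follow-up total below are invented, chosen only to
# reproduce the anxiety rate of 10.05 reported for the scarring group.
def incidence_per_1000_pyar(n_events: int, person_years: float) -> float:
    """New diagnoses divided by total follow-up time, scaled to 1000 PYAR."""
    return 1000 * n_events / person_years

print(incidence_per_1000_pyar(5025, 500_000))  # -> 10.05
```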
Current psychiatric diagnoses, although heritable, have not been clearly mapped onto distinct underlying pathogenic processes. The same symptoms often occur in multiple disorders, and a substantial proportion of both genetic and environmental risk factors are shared across disorders. However, the relationship between shared symptoms and shared genetic liability is still poorly understood.
Aims
Well-characterised, cross-disorder samples are needed to investigate this matter, but few currently exist. Our aim is to develop procedures to purposely curate and aggregate genotypic and phenotypic data in psychiatric research.
Method
As part of the Cardiff MRC Mental Health Data Pathfinder initiative, we have curated and harmonised phenotypic and genetic information from 15 studies to create a new data repository, DRAGON-Data. To date, DRAGON-Data includes over 45 000 individuals: adults and children with neurodevelopmental or psychiatric diagnoses, affected probands within collected families and individuals who carry a known neurodevelopmental risk copy number variant.
Results
We have processed the available phenotype information to derive core variables that can be reliably analysed across groups. In addition, all data-sets with genotype information have undergone rigorous quality control, imputation, copy number variant calling and polygenic score generation.
Conclusions
DRAGON-Data combines genetic and non-genetic information, and is available as a resource for research across traditional psychiatric diagnostic categories. Algorithms and pipelines used for data harmonisation are currently publicly available for the scientific community, and an appropriate data-sharing protocol will be developed as part of ongoing projects (DATAMIND) in partnership with Health Data Research UK.
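To make the harmonisation step concrete, here is a minimal sketch of mapping study-specific diagnosis labels onto a shared core variable. The label dictionary and function are illustrative assumptions; DRAGON-Data's actual algorithms and pipelines are published separately, as noted above.

```python
# Illustrative sketch of phenotype harmonisation: mapping heterogeneous,
# study-specific diagnosis labels onto one core variable. The dictionary is a
# made-up example, not the project's actual coding scheme.
import pandas as pd

CORE_DIAGNOSIS = {
    "scz": "schizophrenia", "schizophrenia (dsm-iv)": "schizophrenia",
    "bp1": "bipolar_disorder", "bipolar i": "bipolar_disorder",
    "adhd": "adhd", "hyperkinetic disorder": "adhd",
}

def harmonise(raw_labels: pd.Series) -> pd.Series:
    """Lower-case, strip and map study labels to core diagnostic categories."""
    return raw_labels.str.lower().str.strip().map(CORE_DIAGNOSIS)

study_labels = pd.Series(["SCZ", "Bipolar I", "hyperkinetic disorder "])
print(harmonise(study_labels).tolist())
# ['schizophrenia', 'bipolar_disorder', 'adhd']
```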
COVID-19 vaccines are likely to be scarce for years to come. Many countries, from India to the U.K., have demonstrated vaccine nationalism. What are the ethical limits to this vaccine nationalism? Neither extreme nationalism nor extreme cosmopolitanism is ethically justifiable. Instead, we propose the fair priority for residents (FPR) framework, in which governments can retain COVID-19 vaccine doses for their residents only to the extent that they are needed to maintain a noncrisis level of mortality while they are implementing reasonable public health interventions. Practically, a noncrisis level of mortality is that experienced during a bad influenza season, which society considers an acceptable background risk. Governments take action to limit mortality from influenza, but there is no emergency that includes severe lockdowns. This “flu-risk standard” is a nonarbitrary and generally accepted heuristic. Mortality above the flu-risk standard justifies greater governmental interventions, including retaining vaccines for a country's own citizens over global need. The precise level of vaccination needed to meet the flu-risk standard will depend upon empirical factors related to the pandemic. This links the ethical principles to the scientific data emerging from the emergency. Thus, the FPR framework recognizes that governments should prioritize procuring vaccines for their country when doing so is necessary to reduce mortality to noncrisis flu-like levels. But after that, a government is obligated to do its part to share vaccines to reduce risks of mortality for people in other countries. We consider and reject objections to the FPR framework based on a country (1) having developed a vaccine, (2) raising taxes to pay for vaccine research and purchase, (3) wanting to eliminate economic and social burdens, and (4) being ineffective in combating COVID-19 through public health interventions.
Seeman, Morris, and Summers misrepresent or misunderstand the arguments we have made, as well as their own previous work. Here, we correct these inaccuracies. We also reiterate our support for hypothesis-driven and evidence-based research.
Studying phenotypic and genetic characteristics of age at onset (AAO) and polarity at onset (PAO) in bipolar disorder can provide new insights into disease pathology and facilitate the development of screening tools.
Aims
To examine the genetic architecture of AAO and PAO and their association with bipolar disorder disease characteristics.
Method
Genome-wide association studies (GWASs) and polygenic score (PGS) analyses of AAO (n = 12 977) and PAO (n = 6773) were conducted in patients with bipolar disorder from 34 cohorts and a replication sample (n = 2237). The association of onset with disease characteristics was investigated in two of these cohorts.
Results
Earlier AAO was associated with a higher probability of psychotic symptoms, suicidality, lower educational attainment, not living together and fewer episodes. Depressive onset correlated with suicidality and manic onset correlated with delusions and manic episodes. Systematic differences in AAO between cohorts and continents of origin were observed. This was also reflected in single-nucleotide variant-based heritability estimates, with higher heritabilities for stricter onset definitions. Increased PGS for autism spectrum disorder (β = −0.34 years, s.e. = 0.08), major depression (β = −0.34 years, s.e. = 0.08), schizophrenia (β = −0.39 years, s.e. = 0.08), and educational attainment (β = −0.31 years, s.e. = 0.08) were associated with an earlier AAO. The AAO GWAS identified one significant locus, but this finding did not replicate. Neither GWAS nor PGS analyses yielded significant associations with PAO.
Conclusions
AAO and PAO are associated with indicators of bipolar disorder severity. Individuals with an earlier onset show an increased polygenic liability for a broad spectrum of psychiatric traits. Systematic differences in AAO across cohorts, continents and phenotype definitions introduce significant heterogeneity, affecting analyses.
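A minimal sketch of the PGS association analysis reported above: regressing AAO on a standardised polygenic score so that the coefficient reads as years of onset per standard deviation of PGS (e.g. β = −0.34 for major depression). The file, column names and covariate set are assumptions for illustration.

```python
# Sketch of the PGS association: regress age at onset on a standardised
# polygenic score so the coefficient reads as years of AAO per s.d. of PGS.
# The file, columns and covariates (sex, ancestry PCs) are assumptions.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("aao_with_pgs.csv")  # hypothetical: aao, pgs_mdd, sex, PC1-3
df["pgs_z"] = (df["pgs_mdd"] - df["pgs_mdd"].mean()) / df["pgs_mdd"].std()

X = sm.add_constant(df[["pgs_z", "sex", "PC1", "PC2", "PC3"]])
fit = sm.OLS(df["aao"], X).fit()
print(fit.params["pgs_z"], fit.bse["pgs_z"])  # beta (years per s.d.), s.e.
```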
Registry-based trials have emerged as a potentially cost-saving study methodology. Early estimates of cost savings, however, conflated the benefits associated with registry utilisation and those associated with other aspects of pragmatic trial designs, which might not all be as broadly applicable. In this study, we sought to build a practical tool that investigators could use across disciplines to estimate the ranges of potential cost differences associated with implementing registry-based trials versus standard clinical trials.
Methods:
We built Markov simulation models to compare unique costs associated with data acquisition, cleaning, and linkage under a registry-based trial design versus a standard clinical trial. We conducted one-way, two-way, and probabilistic sensitivity analyses, varying study characteristics over broad ranges, to determine thresholds at which investigators might optimally select each trial design.
Results:
Registry-based trials were more cost-effective than standard clinical trials 98.6% of the time. Data-related cost savings ranged from $4300 to $600,000 with variation in study characteristics. Cost differences were most sensitive to the number of patients in a study, the number of data elements per patient available in a registry, and the speed with which research coordinators could manually abstract data. Registry incorporation resulted in cost savings when as few as 3768 independent data elements were available and when manual data abstraction took as little as 3.4 seconds per data field.
Conclusions:
Registries offer important resources for investigators. When available, their broad incorporation may help the scientific community reduce the costs of clinical investigation. We offer here a practical tool for investigators to assess potential cost savings.
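To indicate what such a tool computes, here is a hedged Monte Carlo sketch of the registry-versus-standard cost comparison. All distributions, unit costs and the coordinator rate are invented placeholders; the study's actual Markov models and parameter ranges are described in the paper.

```python
# Hedged Monte Carlo sketch of the registry-vs-standard cost comparison.
# Every distribution and unit cost below is an invented placeholder; the
# study's actual Markov models and parameters are given in the paper.
import numpy as np

rng = np.random.default_rng(0)
n_sims = 10_000

n_patients = rng.integers(200, 5_000, n_sims)            # study size
fields_per_patient = rng.integers(20, 400, n_sims)       # registry elements
sec_per_field = rng.uniform(3, 60, n_sims)               # abstraction speed
coordinator_rate = 40 / 3600                             # assumed $/second
registry_linkage = rng.uniform(5_000, 50_000, n_sims)    # linkage/cleaning

# Standard design: coordinators manually abstract every field; registry
# design: a fixed linkage/cleaning cost replaces that manual effort.
standard = n_patients * fields_per_patient * sec_per_field * coordinator_rate
savings = standard - registry_linkage

print(f"registry cheaper in {(savings > 0).mean():.1%} of simulations")
print(f"median data-related saving: ${np.median(savings):,.0f}")
```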
Seven half-day regional listening sessions were held between December 2016 and April 2017 with groups of diverse stakeholders on the issues and potential solutions for herbicide-resistance management. The objective of the listening sessions was to connect with stakeholders and hear their challenges and recommendations for addressing herbicide resistance. The coordinating team hired Strategic Conservation Solutions, LLC, to facilitate all the sessions. The facilitators and the coordinating team used in-person meetings, teleconferences, and email to communicate and coordinate the activities leading up to each regional listening session. The agenda was the same across all sessions and included small-group discussions followed by reporting to the full group for discussion. The planning process was the same across all the sessions, although the selection of venue, time of day, and stakeholder participants differed to accommodate the differences among regions. The listening-session format required a great deal of work and flexibility on the part of the coordinating team and regional coordinators. Overall, the participant evaluations from the sessions were positive, with participants expressing appreciation that they were asked for their thoughts on the subject of herbicide resistance. This paper details the methods and processes used to conduct these regional listening sessions and provides an assessment of the strengths and limitations of those processes.
Herbicide resistance is ‘wicked’ in nature; therefore, results of the many educational efforts to encourage diversification of weed control practices in the United States have been mixed. It is clear that we do not sufficiently understand the totality of the grassroots obstacles, concerns, challenges, and specific solutions needed for varied crop production systems. Weed management issues and solutions vary with such variables as management styles, regions, cropping systems, and available or affordable technologies. Therefore, to help the weed science community better understand the needs and ideas of those directly dealing with herbicide resistance, seven half-day regional listening sessions were held across the United States between December 2016 and April 2017 with groups of diverse stakeholders on the issues and potential solutions for herbicide resistance management. The major goals of the sessions were to gain an understanding of stakeholders and their goals and concerns related to herbicide resistance management, to become familiar with regional differences, and to identify decision maker needs to address herbicide resistance. The messages shared by listening-session participants could be summarized by six themes: we need new herbicides; there is no need for more regulation; there is a need for more education, especially for others who were not present; diversity is hard; the agricultural economy makes it difficult to make changes; and we are aware of herbicide resistance but are managing it. The authors concluded that more work is needed to bring a community-wide, interdisciplinary approach to understanding the complexity of managing weeds within the context of the whole farm operation and for communicating the need to address herbicide resistance.
The Pueblo population of Chaco Canyon during the Bonito Phase (AD 800–1130) employed agricultural strategies and water-management systems to enhance food cultivation in this unpredictable environment. Scepticism concerning the timing and effectiveness of this system, however, remains common. Using optically stimulated luminescence dating of sediments and LiDAR imaging, the authors located Bonito Phase canal features at the far west end of the canyon. Additional ED-XRF and strontium isotope (87Sr/86Sr) analyses confirm the diversion of waters from multiple sources during Chaco’s occupation. The extent of this water-management system raises new questions about social organisation and the role of ritual in facilitating responses to environmental unpredictability.
BACKGROUND: Intracranial growing teratoma syndrome (IGTS) is a rare phenomenon of paradoxical germ cell tumor (GCT) growth during or following treatment despite normalization of tumor markers. We sought to evaluate the frequency, clinical characteristics and outcome of IGTS in patients in 21 North American and Australian institutions. METHODS: Patients with IGTS diagnosed from 2000-2017 were retrospectively evaluated. RESULTS: Out of 739 GCT diagnoses, IGTS was identified in 33 patients (4.5%). IGTS occurred in 9/191 (4.7%) mixed-malignant GCTs, 4/22 (18.2%) immature teratomas (ITs), 3/472 (0.6%) germinomas/germinomas with mature teratoma, and in 17 secreting non-biopsied tumors. Median age at GCT diagnosis was 10.9 years (range 1.8-19.4). Male gender (84%) and pineal location (88%) predominated. Of 27 patients with elevated markers, median serum AFP and beta-HCG were 70 ng/mL (range 9.2-932) and 44 IU/L (range 4.2-493), respectively. IGTS occurred at a median time of 2 months (range 0.5-32) from diagnosis, during chemotherapy in 85%, radiation in 3%, and after treatment completion in 12%. Surgical resection was attempted in all, leading to gross total resection in 76%. Most patients (79%) resumed GCT chemotherapy/radiation after surgery. At a median follow-up of 5.3 years (range 0.3-12), all but 2 patients are alive (1 succumbed to progressive disease, 1 to malignant transformation of GCT). CONCLUSION: IGTS occurred in less than 5% of patients with GCT and most commonly after initiation of chemotherapy. IGTS was more common in patients with IT-only on biopsy than with mixed-malignant GCT. Surgical resection is a principal treatment modality. Survival outcomes for patients who developed IGTS are favorable.
The longstanding association between the major histocompatibility complex (MHC) locus and schizophrenia (SZ) risk has recently been partially accounted for by structural variation at the complement component 4 (C4) gene. This structural variation generates varying levels of C4 RNA expression, and genetic information from the MHC region can now be used to predict C4 RNA expression in the brain. Increased predicted C4A RNA expression is associated with the risk of SZ, and C4 is reported to influence synaptic pruning in animal models.
Methods
Based on our previous studies associating MHC SZ risk variants with poorer memory performance, we tested whether increased predicted C4A RNA expression was associated with reduced memory function in a large (n = 1238) dataset of psychosis cases and healthy participants, and with altered task-dependent cortical activation in a subset of these samples.
Results
We observed that increased predicted C4A RNA expression predicted poorer performance on measures of memory recall (p = 0.016, corrected). Furthermore, in healthy participants, we found that increased predicted C4A RNA expression was associated with a pattern of reduced cortical activity in middle temporal cortex during a measure of visual processing (p < 0.05, corrected).
Conclusions
These data suggest that the effects of C4 on cognition were observable at both a cortical and behavioural level, and may represent one mechanism by which illness risk is mediated. As such, deficits in learning and memory may represent a therapeutic target for new molecular developments aimed at altering C4’s developmental role.
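A minimal sketch of the association test described above, assuming hypothetical column names: predicted C4A RNA expression as a predictor of memory recall, adjusting for diagnosis and age. The expression values themselves would come from established C4 imputation of MHC genotypes, which is not reproduced here.

```python
# Minimal sketch of the association test, with hypothetical column names:
# predicted C4A expression as a predictor of memory recall, adjusting for age
# and diagnosis. C4 imputation from MHC genotypes is not reproduced here.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("c4_cognition.csv")  # hypothetical: recall, c4a_expr, age, dx
fit = smf.ols("recall ~ c4a_expr + age + C(dx)", data=df).fit()
print(fit.params["c4a_expr"], fit.pvalues["c4a_expr"])
```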
Schizophrenia is a highly heritable disorder, linked to several structural abnormalities of the brain. More specifically, previous findings have suggested that increased gyrification in frontal and temporal regions is implicated in the pathogenesis of schizophrenia.
Methods
The current study included participants at high familial risk of schizophrenia who remained well (n = 31), who developed sub-diagnostic symptoms (n = 28) and who developed schizophrenia (n = 9) as well as healthy controls (HC) (n = 16). We first tested whether individuals at high familial risk of schizophrenia carried an increased burden of trait-associated alleles using polygenic risk score analysis. We then assessed the extent to which polygenic risk was associated with gyral folding in the frontal and temporal lobes.
Results
We found that individuals at high familial risk of schizophrenia who developed schizophrenia carried a significantly greater burden of risk-conferring variants for the disorder compared with those at high risk (HR) who developed sub-diagnostic symptoms or remained well, and with HC. Furthermore, within the HR cohort, there was a significant and positive association between schizophrenia polygenic risk score and bilateral frontal gyrification.
Conclusions
These results suggest that polygenic risk for schizophrenia impacts upon early neurodevelopment to confer greater gyral folding in adulthood and an increased risk of developing the disorder.
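For illustration, a minimal sketch of the two analyses in this study: comparing schizophrenia PRS across the outcome groups, and testing the PRS-gyrification association within the high-risk cohort. The data file and column names are assumptions.

```python
# Illustrative sketch of the two analyses: (1) compare schizophrenia PRS
# across outcome groups, (2) test the PRS-gyrification association within the
# high-risk cohort. The file and column names are assumptions.
import pandas as pd
from scipy import stats

df = pd.read_csv("hr_cohort.csv")  # hypothetical: group, prs, frontal_gyri

# (1) One-way ANOVA over HR-ill, HR-symptomatic, HR-well and control groups.
groups = [g["prs"].to_numpy() for _, g in df.groupby("group")]
print(stats.f_oneway(*groups))

# (2) PRS versus bilateral frontal gyrification in the high-risk subjects.
hr = df[df["group"] != "control"]
print(stats.pearsonr(hr["prs"], hr["frontal_gyri"]))
```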
This paper reviews some of the research that has been carried out at the University of Liverpool, where the Flight Science and Technology Research Group has developed its HELIFLIGHT-R full-motion research simulator to create a simulation environment for the launch and recovery of maritime helicopters to ships. HELIFLIGHT-R has been used to conduct flight trials to produce simulated Ship-Helicopter Operating Limits (SHOLs). This virtual engineering approach has led to a much greater understanding of how the dynamic interface between the ship and the helicopter contributes to the pilot's workload and the aircraft's handling qualities, and will inform the conduct of future real-world SHOL trials. The paper also describes how modelling and simulation have been applied to the design of a ship's superstructure to improve the aerodynamic flow field in which the helicopter has to operate. The superstructure aerodynamics also affects the placement of the ship's anemometers and the dispersion of the ship's hot exhaust gases, both of which affect the operational envelope of the helicopter, and both of which can be investigated through simulation.
Trifluralin was evaluated at 1.1, 2.2, and 4.5 kg/ha in 1983 and 1984 at two locations in Iowa for residue carryover injury to corn the following seasons. Three methods of seedbed preparation (no-till, moldboard, and chisel plowing) for corn planting were also examined. There was no effect on corn growth at the 1.1 kg/ha rate of trifluralin. Averaged over the four experiments, reductions in corn height of 8 and 24% were observed 5 weeks after planting at 2.2 and 4.5 kg/ha, respectively. The relative degree of stunting due to trifluralin decreased as the growing season progressed. Early-season carryover injury was more severe in reduced tillage than in moldboard plow treatments in the 1983-84 Nashua experiment. Moldboard and chisel plowing reduced the concentration of trifluralin in the 0- to 7.5-cm zone of the soil profile by 62 and 31%, respectively, when compared to no-till. No yield reductions were observed at the 1.1 or 2.2 kg/ha rate of trifluralin. In 1984, grain yields were reduced by 8 and 16% at Ames and Nashua, respectively, by the 4.5 kg/ha trifluralin rate.
Giant foxtail, woolly cupgrass, and wild-proso millet infest millions of hectares of land devoted to corn production in the midwestern U.S. Control of these species and effects on corn grain yield were evaluated at various timings using POST applications of nicosulfuron vs. applications of various PRE herbicides at 17 locations across the midwestern U.S. in 1992 and 1993. Nicosulfuron applied to 5 to 10 cm giant foxtail and woolly cupgrass provided greater control than that observed with selected PRE herbicides. Giant foxtail control with nicosulfuron averaged 88%, and control of woolly cupgrass averaged 77% across all sites. Nicosulfuron, applied to 5 to 10 cm wild-proso millet, provided a level of control similar to that of selected PRE herbicides. Corn grain yield was greater when giant foxtail was controlled POST with nicosulfuron vs. PRE control with selected soil-applied herbicides. Corn grain yields were similar when nicosulfuron was applied POST to 5 to 10 cm woolly cupgrass or wild-proso millet vs. PRE control of these grass weeds. Across a broad range of geographical locations, nicosulfuron, applied POST to 5 to 10 cm tall grass, provided greater or similar levels of weed control vs. the selected PRE herbicides, with no deleterious effect on grain yield.
To achieve their conservation goals, individuals, communities and organizations need to acquire a diversity of skills, knowledge and information (i.e. capacity). Despite current efforts to build and maintain appropriate levels of conservation capacity, it has been recognized that there will need to be a significant scaling-up of these activities in sub-Saharan Africa. This is because of the rapid increase in the number and extent of environmental problems in the region. We present a range of socio-economic contexts relevant to four key areas of African conservation capacity building: protected area management, community engagement, effective leadership, and professional e-learning. Under these core themes, 39 specific recommendations are presented. These were derived from multi-stakeholder workshop discussions at an international conference held in Nairobi, Kenya, in 2015. At the meeting, 185 delegates (practitioners, scientists, community groups and government agencies) represented 105 organizations from 24 African nations and eight non-African nations. The 39 recommendations constituted six broad types of suggested action: (1) the development of new methods, (2) the provision of capacity building resources (e.g. information or data), (3) the communication of ideas or examples of successful initiatives, (4) the implementation of new research or gap analyses, (5) the establishment of new structures within and between organizations, and (6) the development of new partnerships. A number of cross-cutting issues also emerged from the discussions: the need for a greater sense of urgency in developing capacity building activities; the need to develop novel capacity building methodologies; and the need to move away from one-size-fits-all approaches.