Changing practice patterns caused by the pandemic have created an urgent need for guidance on prescribing stimulants via telepsychiatry for attention-deficit hyperactivity disorder (ADHD). A notable spike in stimulant prescribing accompanied the suspension of the Ryan Haight Act, which allowed stimulants to be prescribed without a face-to-face meeting. Competing forces both for and against prescribing ADHD stimulants by telepsychiatry have emerged, requiring guidelines to balance these factors. On the one hand, factors weighing in favor of increasing the availability of treatment for ADHD via telepsychiatry include enhanced access to care, reduction in the large number of untreated cases, and prevention of the known adverse outcomes of untreated ADHD. On the other hand, factors in favor of limiting telepsychiatry for ADHD include mitigating the possibility of exploiting telepsychiatry for profit or for the misuse, abuse, and diversion of stimulants. This Expert Consensus Group has developed numerous specific guidelines and advocates for some flexibility in allowing telepsychiatry evaluations and treatment to continue without an in-person evaluation. These guidelines also recognize the need for greater scrutiny of certain subpopulations, such as young adults without a prior ADHD diagnosis or treatment who request immediate-release stimulants, a presentation that should raise suspicion of possible medication diversion, misuse, or abuse. In such cases, nonstimulants, controlled-release stimulants, or psychosocial interventions should be prioritized. We encourage the use of outside informants to corroborate the history, the use of rating scales, and access to a hybrid model of both in-person and remote treatment.
Response to lithium in patients with bipolar disorder is associated with clinical and transdiagnostic genetic factors. Combining these variables might help clinicians better predict which patients will respond to lithium treatment.
Aims
To use a combination of transdiagnostic genetic and clinical factors to predict lithium response in patients with bipolar disorder.
Method
This study utilised genetic and clinical data (n = 1034) collected as part of the International Consortium on Lithium Genetics (ConLi+Gen) project. Polygenic risk scores (PRS) were computed for schizophrenia and major depressive disorder, and then combined with clinical variables using a cross-validated machine-learning regression approach. Unimodal, multimodal and genetically stratified models were trained and validated using ridge, elastic net and random forest regression on 692 patients with bipolar disorder from ten study sites using leave-site-out cross-validation. All models were then tested on an independent test set of 342 patients. The best performing models were then tested in a classification framework.
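The leave-site-out validation described above can be sketched as follows. Everything here is an illustrative assumption rather than the consortium's code: the toy records, the `fit`/`predict` helpers, and the training-set-mean model that stands in for the ridge, elastic net and random forest regressors used in the study. The point is the splitting structure: each study site is held out in turn and predicted by a model trained only on the other sites.

```python
from collections import defaultdict

def leave_site_out_cv(records, fit, predict):
    """Leave-site-out cross-validation: hold out each study site in turn,
    fit on the remaining sites, and collect held-out (actual, predicted)
    pairs. `records` is a list of (site, features, response) tuples."""
    by_site = defaultdict(list)
    for site, x, y in records:
        by_site[site].append((x, y))
    out = {}
    for held_out in by_site:
        # Pool every other site's rows into one training set.
        train = [xy for s, rows in by_site.items() if s != held_out for xy in rows]
        model = fit(train)
        out[held_out] = [(y, predict(model, x)) for x, y in by_site[held_out]]
    return out

# Toy stand-in model: predict the training-set mean response.
fit = lambda train: sum(y for _, y in train) / len(train)
predict = lambda model, x: model

records = [
    ("A", [1.0], 2.0), ("A", [2.0], 3.0),
    ("B", [1.5], 2.5), ("B", [0.5], 1.5),
    ("C", [3.0], 4.0),
]
preds = leave_site_out_cv(records, fit, predict)
```

Because no site ever contributes to the model that predicts it, the held-out performance reflects generalisation across sites rather than within-site fit.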
Results
The best performing linear model explained 5.1% (P = 0.0001) of variance in lithium response and was composed of clinical variables, PRS variables and interaction terms between them. The best performing non-linear model used only clinical variables and explained 8.1% (P = 0.0001) of variance in lithium response. A priori genomic stratification improved non-linear model performance to 13.7% (P = 0.0001) and improved the binary classification of lithium response. This model stratified patients based on their meta-polygenic loadings for major depressive disorder and schizophrenia and was then trained using clinical data.
Conclusions
Using PRS to first stratify patients genetically and then training machine-learning models with clinical predictors led to large improvements in lithium response prediction. Combined with other polygenic scores and biological markers in the future, this approach may help identify the patients most likely to respond to lithium treatment.
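The stratify-then-train idea can be illustrated with a minimal sketch. The `prs` field, the two-way split and the zero threshold are hypothetical simplifications of the meta-polygenic stratification described above; within each stratum one would then fit a clinical-only model (ridge, elastic net or random forest in the study) and evaluate it by cross-validation.

```python
def stratify_by_prs(patients, threshold=0.0):
    """Split patients into strata by a combined (meta-)polygenic loading.
    A separate clinical model is then trained inside each stratum.
    `patients` is a list of dicts with a 'prs' field plus clinical fields."""
    strata = {"low": [], "high": []}
    for p in patients:
        strata["high" if p["prs"] >= threshold else "low"].append(p)
    return strata

# Hypothetical patient records with a combined PRS loading:
patients = [
    {"id": 1, "prs": -0.8, "response": 4},
    {"id": 2, "prs": 0.5,  "response": 7},
    {"id": 3, "prs": 1.2,  "response": 8},
]
strata = stratify_by_prs(patients)
# Next step (not shown): fit one clinical-predictor model per stratum.
```

The design choice is that genetics selects the subgroup while clinical variables do the predicting inside it, rather than feeding both into a single pooled model.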
Studying phenotypic and genetic characteristics of age at onset (AAO) and polarity at onset (PAO) in bipolar disorder can provide new insights into disease pathology and facilitate the development of screening tools.
Aims
To examine the genetic architecture of AAO and PAO and their association with bipolar disorder disease characteristics.
Method
Genome-wide association studies (GWASs) and polygenic score (PGS) analyses of AAO (n = 12 977) and PAO (n = 6773) were conducted in patients with bipolar disorder from 34 cohorts and a replication sample (n = 2237). The association of onset with disease characteristics was investigated in two of these cohorts.
Results
Earlier AAO was associated with a higher probability of psychotic symptoms, suicidality, lower educational attainment, not living together and fewer episodes. Depressive onset correlated with suicidality and manic onset correlated with delusions and manic episodes. Systematic differences in AAO between cohorts and continents of origin were observed. This was also reflected in single-nucleotide variant-based heritability estimates, with higher heritabilities for stricter onset definitions. Increased PGS for autism spectrum disorder (β = −0.34 years, s.e. = 0.08), major depression (β = −0.34 years, s.e. = 0.08), schizophrenia (β = −0.39 years, s.e. = 0.08), and educational attainment (β = −0.31 years, s.e. = 0.08) were associated with an earlier AAO. The AAO GWAS identified one significant locus, but this finding did not replicate. Neither GWAS nor PGS analyses yielded significant associations with PAO.
Conclusions
AAO and PAO are associated with indicators of bipolar disorder severity. Individuals with an earlier onset show an increased polygenic liability for a broad spectrum of psychiatric traits. Systematic differences in AAO across cohorts, continents and phenotype definitions introduce significant heterogeneity, affecting analyses.
The Randolph Glacier Inventory (RGI) is a globally complete collection of digital outlines of glaciers, excluding the ice sheets, developed to meet the needs of the Fifth Assessment of the Intergovernmental Panel on Climate Change for estimates of past and future mass balance. The RGI was created with limited resources in a short period. Priority was given to completeness of coverage, but a limited, uniform set of attributes is attached to each of the ~198 000 glaciers in its latest version, 3.2. Satellite imagery from 1999–2010 provided most of the outlines. Their total extent is estimated as 726 800 ± 34 000 km². The uncertainty, about ±5%, is derived from careful single-glacier and basin-scale uncertainty estimates and comparisons with inventories that were not sources for the RGI. The main contributors to uncertainty are probably misinterpretation of seasonal snow cover and debris cover. These errors appear not to be normally distributed, and quantifying them reliably is an unsolved problem. Combined with digital elevation models, the RGI glacier outlines yield hypsometries that can be combined with atmospheric data or model outputs for analysis of the impacts of climatic change on glaciers. The RGI has already proved its value in the generation of significantly improved aggregate estimates of glacier mass changes and total volume, and thus actual and potential contributions to sea-level rise.
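The hypsometry step mentioned above can be sketched in a few lines: given DEM cell elevations that have already been clipped to one glacier outline, bin the glacier's area by elevation band. The 50 m band width, the 30 m × 30 m cell size and the toy elevations are illustrative assumptions, not RGI conventions.

```python
def hypsometry(elevations_m, cell_area_km2, band_m=50):
    """Area-elevation distribution: sum grid-cell area into elevation
    bands of width `band_m` metres. `elevations_m` are DEM cell
    elevations falling inside a single glacier outline."""
    bands = {}
    for z in elevations_m:
        lo = (int(z) // band_m) * band_m  # lower edge of the band
        bands[lo] = bands.get(lo, 0.0) + cell_area_km2
    return dict(sorted(bands.items()))

# Six DEM cells of 0.0009 km^2 each (30 m x 30 m) inside a toy glacier:
dist = hypsometry([4510, 4530, 4560, 4605, 4610, 4640], 0.0009)
```

The resulting area-per-band table is what gets combined with atmospheric data or model output, since melt and accumulation are strong functions of elevation.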
Field experiments were conducted in 1997 and 1998 to evaluate the effect of mowing followed by hexazinone for West Indian dropseed/giant smutgrass (Sporobolus indicus var. pyramidalis) (hereafter referred to as dropseed) control. The experimental design was a split plot, with mowing (nonmowed [mature] and 35-cm regrowth) as the whole plot and hexazinone rate (0.0 [control], 0.56, 0.84, 1.12, 1.40, and 1.68 kg ai/ha) as the subplot treatments. The application of 0.84 kg/ha hexazinone provided 94 and 81% dropseed control, 365 d after treatment (DAT) during 1997 and 1998, respectively. Increasing application rate to 1.12 kg/ha hexazinone provided 87 and 88% dropseed control, 365 DAT during 1997 and 1998, respectively. Both the 0.84 and 1.12 kg/ha rates provided the same average control (87.5%); however, the 1.12 kg/ha rate provided consistent control over years. Mowing dropseed, followed by hexazinone application at 35-cm regrowth, provided no additional control when compared with no mowing treatments. Rates of hexazinone at 1.40 and 1.68 kg/ha caused phytotoxicity to bahiagrass and increased bare soil surface area, especially 90 and 120 DAT. Phytotoxic effect on bahiagrass and on bare soil decreased 365 DAT, resulting in 75 to 80% total forage cover. Concentration and yield of total nonstructural carbohydrates were significantly lower for the mowed 35-cm regrowth treatment than for the nonmowed plants; however, even in its weakened condition this reduction had no effect on dropseed control.
In a series of articles in this journal, Wes Morriston has launched what can only be considered a full-scale assault on the divine command theory (DCT) of morality. According to Morriston, proponents of this theory are committed to an alarming counterpossible: that if God did command an annual human sacrifice, it would be morally obligatory. Since only a ‘terrible’ deity would do such a ‘terrible’ thing, we should reject DCT. Indeed, if there were such a deity, the world would be a terrible place – certainly far worse than it is. We argue that Morriston's non-standard method for assessing counterpossibles of this sort is flawed. Not only is the savvy DCT-ist at liberty to reject it, but Morriston's method badly misfires in the face of theistic activism – a metaphysical platform available to DCT-ists, according to which if God didn't exist, neither would anything else.
Objectives
First, to evaluate the ability of a short dietary questionnaire (SDQ) to estimate energy intake (EI) on group and individual levels compared with total energy expenditure (TEE) measured by the doubly labelled water method. Second, to compare the SDQ's performance in estimating energy, nutrient and food intakes with a sixty-six-item FFQ used in large-scale Swedish epidemiological research.
Design
Cross-sectional.
Setting
Umeå, Sweden.
Subjects
In total, sixty-five non-pregnant women, of whom thirty-one were overweight or obese, and twenty-five pregnant, normal-weight women completed the protocol.
Results
On average, the SDQ captured 78 % and 79 % of absolute TEE in the non-pregnant and pregnant normal-weight women, respectively. Furthermore, the SDQ captured an average of 57 % of TEE in the overweight/obese non-pregnant women. The Spearman correlation of EI and TEE was significant in the overweight and obese women only (ρ = 0·37, 95 % CI 0·02, 0·64). There was no significant difference between the SDQ and the more extensive FFQ in the ability to assess EI when compared with TEE. Intakes of most nutrients and foods were significantly higher when assessed with the SDQ compared with the FFQ.
Conclusions
A new short dietary questionnaire with an alternative design underestimated EI of non-pregnant and pregnant, overweight and obese women on a group level but was able to rank the overweight/obese women according to EI. Furthermore, the short questionnaire captured at least as much of the energy, nutrient and food intakes of non-pregnant normal-weight and overweight/obese women on the group level as a traditional, more extensive FFQ.
Studies have suggested that moderate alcohol consumption is associated with a reduced risk of CVD and premature mortality in individuals with diabetes mellitus. However, history of alcohol consumption has hardly been taken into account. We investigated the association between current alcohol consumption and mortality in men and women with diabetes mellitus accounting for past alcohol consumption. Within the European Prospective Investigation into Cancer and Nutrition (EPIC), a cohort was defined of 4797 participants with a confirmed diagnosis of diabetes mellitus. Men and women were assigned to categories of baseline and past alcohol consumption. Hazard ratios (HR) and 95 % CI for total mortality were estimated with multivariable Cox regression models, using light alcohol consumption (>0–6 g/d) as the reference category. Compared with light alcohol consumption, no relationship was observed between consumption of 6 g/d or more and total mortality. HR for >6–12 g/d was 0·89 (95 % CI 0·61, 1·30) in men and 0·86 (95 % CI 0·46, 1·60) in women. Adjustment for past alcohol consumption did not change the estimates substantially. In individuals who at baseline reported abstaining from alcohol, mortality rates were increased relative to light consumers: HR was 1·52 (95 % CI 0·99, 2·35) in men and 1·81 (95 % CI 1·04, 3·17) in women. The present study in diabetic individuals showed no association between current alcohol consumption >6 g/d and mortality risk compared with light consumption. The increased mortality risk among non-consumers appeared to be affected by their past alcohol consumption rather than their current abstinence.