Diversifying the simplified landscape of corn and soybeans in the Midwest is an emerging priority in both the public and private sectors to reap a suite of climate, social, agronomic, and economic benefits. However, little research has documented the perspectives of farmers, the primary stakeholders in diversification efforts. This preliminary report uses newly collected survey data (n = 725) from farmers in the states of Illinois, Indiana, and Iowa to provide descriptive statistics and tests to understand what farmers in the region think about agricultural diversification, including their perspectives on its benefits, barriers, and opportunities. For the purposes of the study, we define diversification as extended rotations, perennials, horticulture, grazed livestock, and agroforestry practices. We find that a majority or plurality of farmers in the sample believe that diversified systems are superior to non-diversified systems at achieving a range of environmental, agronomic, and economic goals, although many farmers are still forming opinions. Farmers believe that primarily economic barriers stand in the way of diversification, including the lack of affordable land, low short-term returns on investment, and lack of labor. Farmers identified key opportunities to increase diversification through developing processing capacity for local meat and specialty crops, increasing demand for diversified products, and providing more information on returns on investment of diversified systems. Different interventions, however, may be needed to support farmers who are already diversified compared to non-diversified farmers. Building on these initial results, future studies using these data will develop more detailed analyses and recommendations for policymakers, the private sector, and agricultural organizations to support diversification.
Cohort studies demonstrate that people who later develop schizophrenia, on average, present with mild cognitive deficits in childhood and experience a decline in adolescence and adulthood. Yet tremendous heterogeneity exists during the course of psychotic disorders, including the prodromal period. Individuals identified as being in this period (known as CHR-P, clinical high risk for psychosis) are at heightened risk for developing psychosis (~35%) and begin to exhibit cognitive deficits. Cognitive impairments in CHR-P (treated as a single group) appear to be relatively stable or to ameliorate over time, although a sizeable proportion of individuals has been reported to decline on measures related to processing speed or verbal learning. The purpose of this analysis is to use data-driven approaches to identify latent subgroups among CHR-P based on cognitive trajectories. This will yield a clearer understanding of the timing and presentation of both general and domain-specific deficits.
Participants and Methods:
Participants included 684 young people at CHR-P (ages 12–35) from the second cohort of the North American Prodrome Longitudinal Study. Performance on the MATRICS Consensus Cognitive Battery (MCCB) and the Wechsler Abbreviated Scale of Intelligence (WASI-I) was assessed at baseline, 12 months, and 24 months. Tested MCCB domains included verbal learning, speed of processing, working memory, and reasoning and problem-solving. Sex- and age-based norms were utilized. The Oral Reading subtest of the Wide Range Achievement Test (WRAT4) indexed premorbid IQ at baseline. Latent class mixture models were used to identify distinct trajectories of cognitive performance across the two years. One- to five-class solutions were compared to determine the best solution, based on goodness-of-fit metrics, interpretability of the latent trajectories, and proportion of subgroup membership (>5%).
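As a rough illustration of the class-enumeration step described above, the sketch below fits one- to five-component Gaussian mixtures to simulated three-wave cognitive trajectories and compares them by BIC, checking that each retained class holds more than 5% of the sample. This is a simplified stand-in for the latent class mixture models used in the study (typically fit in specialized packages such as R's lcmm); all data and settings here are invented.

```python
# Illustrative sketch only: class enumeration for trajectory subgroups.
# Plain Gaussian mixtures on the per-person score vectors stand in for the
# study's latent class mixture models; data, seed, and settings are invented.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Rows = participants; columns = scores at baseline, 12, and 24 months.
trajectories = rng.normal(50, 10, size=(684, 3))

bic = {}
for k in range(1, 6):  # compare one- to five-class solutions
    gm = GaussianMixture(n_components=k, n_init=10, random_state=0)
    gm.fit(trajectories)
    bic[k] = gm.bic(trajectories)

best_k = min(bic, key=bic.get)  # lower BIC = better fit
print("BIC by class count:", bic, "-> best:", best_k)

# Retain a solution only if every class holds >5% of the sample.
labels = GaussianMixture(n_components=best_k, random_state=0).fit_predict(trajectories)
print("class shares:", np.bincount(labels) / len(labels))
```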
Results:
A one-class solution was found for WASI-I Full-Scale IQ, as people at CHR-P predominantly demonstrated an average IQ that increased gradually over time. One-class solutions also best fit the trajectories for the speed of processing, verbal learning, and working memory domains. Two distinct subgroups were identified on one of the executive functioning domains, reasoning and problem-solving (NAB Mazes). The sample divided into a subgroup with unimpaired performance and mild improvement over time (Class I, 74%) and a subgroup with persistent performance two standard deviations below average (Class II, 26%). Between these classes, no significant differences were found for biological sex, age, years of education, or likelihood of conversion to psychosis (OR = 1.68, 95% CI 0.86 to 3.14). Individuals assigned to Class II did demonstrate a lower WASI-I IQ at baseline (96.3 vs. 106.3) and a lower premorbid IQ (100.8 vs. 106.2).
Conclusions:
Youth at CHR-P demonstrate relatively homogeneous trajectories across time in terms of general cognition and most individual domains. In contrast, two distinct subgroups were observed on a measure of higher-order cognitive skills involving planning and foresight, and these subgroups notably exist independent of conversion outcome. Overall, these findings replicate and extend results from a recently published latent class analysis that examined 12-month trajectories among CHR-P using a different cognitive battery (Allott et al., 2022). The findings inform which individuals at CHR-P may be most likely to benefit from cognitive remediation and can shed light on the substrates of deficits by establishing meaningful subtypes.
Clinical implementation of risk calculator models in the clinical high-risk for psychosis (CHR-P) population has been hindered by heterogeneous risk distributions across study cohorts, which could be attributed to pre-ascertainment illness progression. To examine this, we tested whether the duration of attenuated psychotic symptom (APS) worsening prior to baseline moderated the performance of the North American Prodrome Longitudinal Study 2 (NAPLS2) risk calculator. We also examined whether rates of cortical thinning, another marker of illness progression, bolstered clinical prediction models.
Methods
Participants from both the NAPLS2 and NAPLS3 samples were classified as either ‘long’ or ‘short’ symptom duration based on time since APS increase prior to baseline. The NAPLS2 risk calculator model was applied to each of these groups. In a subset of NAPLS3 participants who completed follow-up magnetic resonance imaging scans, change in cortical thickness was combined with the individual risk score to predict conversion to psychosis.
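The sketch below illustrates, on simulated data, the kind of model-augmentation step described above: an individual risk-calculator score is combined with a cortical-thinning measure in a logistic regression, and discrimination is compared by AUC. Variable names and effect sizes are assumptions for illustration, not the NAPLS analysis code.

```python
# Simulated illustration (not the NAPLS analysis code): combine an
# individual risk-calculator score with a cortical-thinning measure in a
# logistic model and compare discrimination by AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 200
risk_score = rng.uniform(0, 1, n)              # risk-calculator output (assumed scale)
thinning = rng.normal(-0.01, 0.005, n)         # annualized cortical-thickness change, mm
converted = (rng.uniform(0, 1, n) < 0.2 + 0.4 * risk_score).astype(int)

X_base = risk_score.reshape(-1, 1)
X_combo = np.column_stack([risk_score, thinning])
base = LogisticRegression().fit(X_base, converted)
combo = LogisticRegression().fit(X_combo, converted)

print("risk score alone AUC:", roc_auc_score(converted, base.predict_proba(X_base)[:, 1]))
print("score + thinning AUC:", roc_auc_score(converted, combo.predict_proba(X_combo)[:, 1]))
```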
Results
The risk calculator models achieved similar performance across the combined NAPLS2/NAPLS3 sample [area under the curve (AUC) = 0.69], the long duration group (AUC = 0.71), and the short duration group (AUC = 0.71). The shorter duration group was younger and had higher baseline APS than the longer duration group. The addition of cortical thinning improved the prediction of conversion significantly for the short duration group (AUC = 0.84), with a moderate improvement in prediction for the longer duration group (AUC = 0.78).
Conclusions
These results suggest that early illness progression differs among CHR-P patients, is detectable with both clinical and neuroimaging measures, and could play an essential role in the prediction of clinical outcomes.
Hemorrhage control prior to shock onset is increasingly recognized as a time-critical intervention. Although tourniquets (TQs) have been demonstrated to save lives, less is known about the physiologic parameters underlying successful TQ application beyond palpation of distal pulses. The current study directly visualized distal arterial occlusion via ultrasonography and measured associated pressure and contact force.
Methods:
Fifteen tactical officers participated as live models for the study. Arterial occlusion was performed using a standard adult blood pressure (BP) cuff and a Combat Application Tourniquet Generation 7 (CAT7) TQ, applied sequentially to the left mid-bicep. Arterial flow cessation was determined by radial artery palpation and brachial artery pulsed-wave Doppler ultrasound (US) evaluation. Steady-state maximal generated force was measured using a thin-film force sensor.
Results:
The mean (95% CI) systolic blood pressure (SBP) required to occlude the palpable distal pulse was 112.9 mmHg (109-117); the corresponding contact force was 23.8 N (22.0-25.6). Arterial flow was visible via US in 100% of subjects despite the lack of a palpable pulse. The mean (95% CI) SBP and contact force required to eliminate US flow were 132 mmHg (127-137) and 27.7 N (25.1-30.3). The mean (95% CI) number of windlass turns to eliminate a palpable pulse was 1.3 (1.0-1.6), while 1.6 (1.2-1.9) turns were required to eliminate US flow.
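The summary statistic used above, a mean with a t-based 95% CI, can be reproduced from raw readings as sketched below; the pressure values are invented, not the study's measurements.

```python
# Sketch of the summary statistic used above: a mean with a t-based 95% CI.
# The readings below are invented, not the study's measurements.
import numpy as np
from scipy import stats

def mean_ci(x, level=0.95):
    x = np.asarray(x, dtype=float)
    m = x.mean()
    half = stats.t.ppf(0.5 + level / 2, df=len(x) - 1) * stats.sem(x)
    return m, m - half, m + half

occlusion_sbp_mmHg = [110, 115, 108, 118, 112, 116, 111, 113, 109, 117]
print("SBP to occlude palpable pulse: %.1f (%.1f-%.1f)" % mean_ci(occlusion_sbp_mmHg))
```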
Conclusions:
Loss of distal radial pulse does not indicate lack of arterial flow distal to upper extremity TQ. On average, an additional one-quarter windlass turn was required to eliminate distal flow. Blood pressure and force measurements derived in this study may provide data to guide future TQ designs and inexpensive, physiologically accurate TQ training models.
Area-level residential instability (ARI), an index of social fragmentation, has been shown to explain the association between urbanicity and psychosis. Urban upbringing has been shown to be associated with decreased gray matter volumes (GMVs) of brain regions corresponding to the right caudal middle frontal gyrus (CMFG) and rostral anterior cingulate cortex (rACC).
Objectives
We hypothesized that greater ARI would be associated with reduced right CMFG and rACC GMVs.
Methods
Data were collected at baseline as part of the North American Prodrome Longitudinal Study. Counties where participants resided during childhood were geocoded and linked to area-level factors using US Census data. ARI was defined as the percentage of residents who were living in a different house five years earlier. Generalized linear mixed models tested associations between ARI and GMVs.
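A minimal sketch of the kind of mixed model named above, assuming illustrative column names, simulated data, and a random intercept per scanning site (the grouping structure in the real analysis may differ):

```python
# A minimal sketch of the mixed-model analysis, assuming illustrative column
# names, simulated data, and a random intercept per scanning site.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 93
df = pd.DataFrame({
    "ari": rng.uniform(30, 70, n),              # % residents in a different house 5 y earlier
    "group": rng.choice(["HC", "CHR"], n),
    "site": rng.choice(["A", "B", "C"], n),
})
df["cmfg_gmv"] = 5.0 - 0.01 * df["ari"] + rng.normal(0, 0.3, n)

fit = smf.mixedlm("cmfg_gmv ~ ari * group", data=df, groups=df["site"]).fit()
print(fit.summary())  # the ari:group row tests whether the ARI slope differs by group
```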
Results
This study included 29 HC and 64 CHR-P individuals who were aged 12 to 24 years, had remained in their baseline residential area, and had magnetic resonance imaging scans. ARI was associated with reduced right CMFG (adjusted β = -0.258; 95% CI = -0.502 to -0.015) and right rACC volumes (adjusted β = -0.318; 95% CI = -0.612 to -0.023). The interaction terms (ARI × diagnostic group) in the prediction of both brain regions were not significant, indicating that the relationships between ARI and regional brain volumes held for both CHR-P and HCs.
Conclusions
Like urban upbringing, ARI may be an important social environmental characteristic that adversely impacts brain regions related to schizophrenia.
Behaviors typical of body-focused repetitive behavior disorders such as trichotillomania (TTM) and skin-picking disorder (SPD) are often associated with pleasure or relief, and with little or no physical pain, suggesting aberrant pain perception. Conclusive evidence about pain perception and correlates in these conditions is, however, lacking.
Methods
A multisite international study examined pain perception and its physiological correlates in adults with TTM (n = 31), SPD (n = 24), and healthy controls (HCs; n = 26). The cold pressor test was administered, and measurements of pain perception and cardiovascular parameters were taken every 15 seconds. Pain perception, latency to pain tolerance, and cardiovascular parameters, along with their associations with illness severity and comorbid depression, as well as interaction effects (group × time interval), were investigated across groups.
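The sketch below shows one way to test a group × time-interval effect of the kind described above, using a random-intercept mixed model on simulated cold-pressor ratings taken every 15 seconds; group sizes follow the abstract, but all scores and the exact modeling choices are illustrative.

```python
# Sketch of a group-by-time analysis on simulated cold-pressor data: pain
# ratings every 15 s, a random intercept per subject, and a group x time
# interaction. Group sizes follow the abstract; everything else is invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
rows = []
for group, n in [("TTM", 31), ("SPD", 24), ("HC", 26)]:
    for subj in range(n):
        for t in range(0, 120, 15):  # ratings every 15 seconds
            rows.append({"group": group, "subject": f"{group}{subj}",
                         "time_s": t, "pain": rng.normal(4 + t / 60, 1)})
df = pd.DataFrame(rows)

fit = smf.mixedlm("pain ~ group * time_s", data=df, groups=df["subject"]).fit()
print(fit.summary())  # group:time_s rows test the interaction effect
```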
Results
There were no group differences in pain ratings over time (P = .8) or latency to pain tolerance (P = .8). Illness severity was not associated with pain ratings (all P > .05). In terms of diastolic blood pressure (DBP), the main effect of group was statistically significant (P = .01), with post hoc analyses indicating higher mean DBP in TTM (95% confidence interval [CI], 84.0-93.5) compared with SPD (95% CI, 73.5-84.2; P = .01) and HCs (95% CI, 75.6-86.0; P = .03). Pain perception did not differ between those with and those without depression (TTM: P = .2, SPD: P = .4).
Conclusion
The study findings were mostly negative, suggesting that aberrant general pain perception is not involved in TTM and SPD. Other underlying drivers of hair-pulling and skin-picking behavior (eg, abnormal reward processing) should be investigated.
While comorbidity of clinical high-risk for psychosis (CHR-P) status and social anxiety is well-established, it remains unclear how social anxiety and positive symptoms covary over time in this population. The present study aimed to determine whether there is more than one covariant trajectory of social anxiety and positive symptoms in the North American Prodrome Longitudinal Study cohort (NAPLS 2) and, if so, to test whether the different trajectory subgroups differ in terms of genetic and environmental risk factors for psychotic disorders and general functional outcome.
Methods
In total, 764 CHR individuals were evaluated at baseline for social anxiety and psychosis risk symptom severity and followed up every 6 months for 2 years. Application of group-based multi-trajectory modeling discerned three subgroups based on the covariant trajectories of social anxiety and positive symptoms over 2 years.
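As a simplified stand-in for group-based multi-trajectory modeling (usually fit with dedicated tools such as Stata's traj plugin), the sketch below concatenates each person's two symptom trajectories and clusters them jointly with a three-component Gaussian mixture; all values are simulated placeholders.

```python
# Simplified stand-in for group-based multi-trajectory modeling: each
# person's two symptom trajectories are concatenated and clustered jointly
# with a three-component Gaussian mixture. All values are simulated.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
n, waves = 764, 5                          # baseline + four 6-month follow-ups
social_anxiety = rng.normal(10, 3, (n, waves))
positive_sx = rng.normal(12, 4, (n, waves))
features = np.hstack([social_anxiety, positive_sx])  # joint trajectories

labels = GaussianMixture(n_components=3, n_init=10,
                         random_state=0).fit_predict(features)
for k in range(3):  # mean joint trajectory per latent subgroup
    print(k, features[labels == k].mean(axis=0).round(1))
```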
Results
One of the subgroups showed sustained social anxiety over time despite moderate recovery in positive symptoms, while the other two showed recovery of social anxiety below clinically significant thresholds, along with modest to moderate recovery in positive symptom severity. The trajectory group with sustained social anxiety had poorer long-term global functional outcomes than the other trajectory groups. In addition, compared with the other two trajectory groups, membership in the group with sustained social anxiety was predicted by higher levels of polygenic risk for schizophrenia and environmental stress exposures.
Conclusions
Together, these analyses indicate differential relevance of sustained v. remitting social anxiety symptoms in the CHR-P population, which in turn may carry implications for differential intervention strategies.
Trichotillomania (TTM) and skin picking disorder (SPD) are common and often debilitating mental health conditions, grouped under the umbrella term of body-focused repetitive behaviors (BFRBs). Recent clinical subtyping found that there were three distinct subtypes of TTM and two of SPD. Whether these clinical subtypes map on to any unique neurobiological underpinnings, however, remains unknown.
Methods
Two hundred and fifty-one adults [193 with a BFRB (85.5% [n = 165] female) and 58 healthy controls (77.6% [n = 45] female)] were recruited from the community for a multicenter between-group comparison using structural neuroimaging. Differences in whole brain structure were compared across the subtypes of BFRBs, controlling for age, sex, scanning site, and intracranial volume.
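A hedged sketch of the covariate-adjustment logic for a single regional volume follows; the study itself ran whole-brain analyses, and the column names and data here are invented.

```python
# Hedged sketch of a covariate-adjusted subtype comparison for one regional
# volume (the study itself was whole-brain); column names and data invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(10)
n = 251
df = pd.DataFrame({
    "subtype": rng.choice(["control", "low_awareness", "sensory_sensitive",
                           "impulsive_perfectionist"], n),
    "age": rng.uniform(18, 60, n),
    "sex": rng.choice(["F", "M"], n, p=[0.84, 0.16]),
    "site": rng.choice(["site1", "site2"], n),
    "icv": rng.normal(1500, 120, n),        # intracranial volume, cm^3
})
df["occipital_vol"] = 0.03 * df["icv"] + rng.normal(0, 3, n)

fit = smf.ols("occipital_vol ~ C(subtype, Treatment('control')) + age + sex + site + icv",
              data=df).fit()
print(fit.summary())  # subtype rows give covariate-adjusted group differences
```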
Results
When the subtypes of TTM were compared, low-awareness hair pullers demonstrated increased cortical volume in the lateral occipital lobe relative to controls and sensory-sensitive pullers. In addition, impulsive/perfectionist hair pullers showed relatively decreased volume near the lingual gyrus of the inferior occipital-parietal lobe compared with controls.
Conclusions
These data indicate that the anatomical substrates of particular forms of BFRBs are dissociable, which may have implications for understanding clinical presentations and treatment response.
In recent years, a variety of efforts have been made in political science to enable, encourage, or require scholars to be more open and explicit about the bases of their empirical claims and, in turn, make those claims more readily evaluable by others. While qualitative scholars have long taken an interest in making their research open, reflexive, and systematic, the recent push for overarching transparency norms and requirements has provoked serious concern within qualitative research communities and raised fundamental questions about the meaning, value, costs, and intellectual relevance of transparency for qualitative inquiry. In this Perspectives Reflection, we crystallize the central findings of a three-year deliberative process—the Qualitative Transparency Deliberations (QTD)—involving hundreds of political scientists in a broad discussion of these issues. Following an overview of the process and the key insights that emerged, we present summaries of the QTD Working Groups’ final reports. Drawing on a series of public, online conversations that unfolded at www.qualtd.net, the reports unpack transparency’s promise, practicalities, risks, and limitations in relation to different qualitative methodologies, forms of evidence, and research contexts. Taken as a whole, these reports—the full versions of which can be found in the Supplementary Materials—offer practical guidance to scholars designing and implementing qualitative research, and to editors, reviewers, and funders seeking to develop criteria of evaluation that are appropriate—as understood by relevant research communities—to the forms of inquiry being assessed. We dedicate this Reflection to the memory of our coauthor and QTD working group leader Kendra Koivu.
We evaluated the safety and feasibility of high-intensity interval training via a novel telemedicine ergometer (MedBIKE™) in children with Fontan physiology.
Methods:
The MedBIKE™ is a custom telemedicine ergometer, incorporating a video game platform and a live feed of patient video/audio, electrocardiography, pulse oximetry, and power output, for remote medical supervision and modulation of work. There were three study phases: (I) an exercise workload comparison between the MedBIKE™ and a standard cardiopulmonary exercise ergometer in 10 healthy adults; (II) an in-hospital safety, feasibility, and user-experience (via questionnaire) assessment of a MedBIKE™ high-intensity interval training protocol in children with Fontan physiology; and (III) an eight-week home-based high-intensity interval training programme in two participants with Fontan physiology.
Results:
There was good agreement in oxygen consumption during graded exercise at matched work rates between the cardiopulmonary exercise ergometer and MedBIKE™ (1.1 ± 0.5 L/minute versus 1.1 ± 0.5 L/minute, p = 0.44). Ten youth with Fontan physiology (11.5 ± 1.8 years old) completed a MedBIKE™ high-intensity interval training session with no adverse events. The participants found the MedBIKE™ to be enjoyable and easy to navigate. In two participants, the 8-week home-based protocol was tolerated well with completion of 23/24 (96%) and 24/24 (100%) of sessions, respectively, and no adverse events across the 47 sessions in total.
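The phase I agreement check can be illustrated with a paired comparison of oxygen consumption on the two ergometers at matched work rates, as sketched below on invented values; the published analysis may have used a different agreement method.

```python
# Sketch of a paired comparison of oxygen consumption on the two ergometers
# at matched work rates; values are invented, and the published analysis may
# have used a different agreement method.
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
vo2_cpet = rng.normal(1.1, 0.5, 10)               # L/min, standard ergometer
vo2_medbike = vo2_cpet + rng.normal(0, 0.05, 10)  # L/min, MedBIKE
t, p = stats.ttest_rel(vo2_cpet, vo2_medbike)
print(f"paired t = {t:.2f}, p = {p:.2f}")
```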
Conclusion:
The MedBIKE™ resulted in similar physiological responses as compared to a cardiopulmonary exercise test ergometer and the high-intensity interval training protocol was safe, feasible, and enjoyable in youth with Fontan physiology. A randomised-controlled trial of a home-based high-intensity interval training exercise intervention using the MedBIKE™ will next be undertaken.
Introduction: Workplace-based assessments (WBAs) are integral to emergency medicine residency training. However, many biases undermine their validity, such as an assessor's personal inclination to rate learners leniently or stringently. Outlier assessors produce assessment data that may not reflect the learner's performance. Our emergency department introduced a new Daily Encounter Card (DEC) using entrustability scales in June 2018. Entrustability scales reflect the degree of supervision required for a given task and have been shown to improve assessment reliability and discrimination. It was unclear what effect they would have on assessor stringency/leniency; we hypothesized that they would reduce the number of outlier assessors. We propose a novel, simple method to identify outlying assessors in the setting of WBAs. We also examine the effect of transitioning from a norm-based assessment to an entrustability scale on the population of outlier assessors. Methods: This was a prospective pre-/post-implementation study, including all DECs completed between July 2017 and June 2019 at The Ottawa Hospital Emergency Department. For each phase, we identified outlier assessors as follows: 1. An assessor is a potential outlier if the mean of the scores they awarded was more than two standard deviations away from the mean score of all completed assessments. 2. For each assessor identified in step 1, their learners’ assessment scores were compared to the overall mean of all learners. This ensures that the assessor was not simply awarding outlying scores due to working with outlier learners. Results: 3927 and 3860 assessments were completed by 99 and 116 assessors in the pre- and post-implementation phases, respectively. We identified 9 vs 5 outlier assessors (p = 0.16) in the pre- and post-implementation phases. Of these, 6 vs 0 (p = 0.01) were stringent, while 3 vs 5 (p = 0.67) were lenient. One assessor was identified as an outlier (lenient) in both phases. Conclusion: Our proposed method successfully identified outlier assessors and could be used to identify assessors who might benefit from targeted coaching and feedback on their assessments. The transition to an entrustability scale resulted in a non-significant trend towards fewer outlier assessors. Further work is needed to identify ways to mitigate the effects of rater cognitive biases.
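The two-step outlier screen described in the Methods above translates directly into code. The sketch below runs it on a simulated assessment table (column names are assumptions), with one artificially lenient assessor injected so the screen has something to find.

```python
# The two-step outlier screen from the Methods, run on a simulated
# assessment table (column names assumed). One artificially lenient
# assessor is injected so the screen has something to find.
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)
n = 4000
df = pd.DataFrame({
    "assessor": rng.choice([f"A{i}" for i in range(100)], n),
    "learner": rng.choice([f"L{i}" for i in range(150)], n),
    "score": rng.normal(3.8, 0.6, n),
})
df.loc[df["assessor"] == "A0", "score"] += 2.0  # injected lenient assessor

overall_mean, overall_sd = df["score"].mean(), df["score"].std()

# Step 1: flag assessors whose mean awarded score is >2 SD from the overall mean.
assessor_means = df.groupby("assessor")["score"].mean()
flagged = assessor_means[(assessor_means - overall_mean).abs() > 2 * overall_sd].index

# Step 2: compare the flagged assessors' learners (rated by other assessors)
# with the overall mean, to rule out outlier learners as the explanation.
for a in flagged:
    learners = df.loc[df["assessor"] == a, "learner"].unique()
    elsewhere = df[df["learner"].isin(learners) & (df["assessor"] != a)]["score"].mean()
    print(f"{a}: own mean {assessor_means[a]:.2f}, their learners elsewhere "
          f"{elsewhere:.2f}, overall {overall_mean:.2f}")
```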
Introduction: The Ottawa Emergency Department Shift Observation Tool (O-EDShOT) was recently developed to assess a resident's ability to safely run an ED shift and is supported by multiple sources of validity evidence. The O-EDShOT uses entrustability scales, which reflect the degree of supervision required for a given task. It was found to discriminate between learners of different levels and to differentiate between residents who were rated as able to safely run the shift and those who were not. In June 2018 we replaced norm-based daily encounter cards (DECs) with the O-EDShOT. With the ideal assessment tool, most of the score variability would be explained by variability in learners’ performances. In reality, however, much of the observed variability is explained by other factors. The purpose of this study is to determine what proportion of total score variability is accounted for by learner variability when using norm-based DECs vs the O-EDShOT. Methods: This was a prospective pre-/post-implementation study, including all daily assessments completed between July 2017 and June 2019 at The Ottawa Hospital ED. A generalizability analysis (G study) was performed to determine what proportion of total score variability was accounted for by the various factors in this study (learner, rater, form, PGY level) for both the pre- and post-implementation phases. We collected 12 months of data for each phase because we estimated that 6-12 months would be required to observe a measurable increase in entrustment scale scores within a learner. Results: A total of 3908 and 3679 assessments were completed by 99 and 116 assessors in the pre- and post-implementation phases, respectively. Our G study revealed that 21% of total score variance was explained by a combination of post-graduate year (PGY) level and the individual learner in the pre-implementation phase, compared to 59% in the post-implementation phase. An average of 51 vs 27 forms/learner were required to achieve a reliability of 0.80 in the pre- and post-implementation phases, respectively. Conclusion: A significantly greater proportion of total score variability is explained by variability in learners’ performances with the O-EDShOT compared to norm-based DECs. The O-EDShOT also requires fewer assessments to generate a reliable estimate of the learner's ability. This study suggests that the O-EDShOT is a more useful assessment tool than norm-based DECs and could be adopted in other emergency medicine training programs.
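The reliability logic of the G study can be sketched as follows: estimate the between-learner variance component with a random-intercept model, then solve the G-coefficient formula for the number of forms needed to reach 0.80. The data, and therefore the resulting numbers, are simulated and will not match the study's 51 vs 27 forms.

```python
# Sketch of the G-study logic on simulated scores: estimate the between-
# learner variance component, then solve the G coefficient for the number
# of forms needed to reach 0.80. Numbers will not match the study's.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
ids = [f"L{i}" for i in range(100)]
true_ability = dict(zip(ids, rng.normal(0, 0.5, 100)))
df = pd.DataFrame({"learner": np.repeat(ids, 30)})
df["score"] = 3.8 + df["learner"].map(true_ability) + rng.normal(0, 0.6, len(df))

fit = smf.mixedlm("score ~ 1", data=df, groups=df["learner"]).fit()
var_learner = float(fit.cov_re.iloc[0, 0])  # between-learner variance
var_error = fit.scale                       # residual (form/rater/occasion) variance

# G for the mean of k forms: var_L / (var_L + var_E / k); solve for G = 0.80.
k = var_error / var_learner * 0.80 / (1 - 0.80)
print(f"learner var {var_learner:.3f}, error var {var_error:.3f}, "
      f"forms for G = 0.80: {int(np.ceil(k))}")
```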
Evidence suggests that early trauma may have a negative effect on cognitive functioning in individuals with psychosis, yet the relationship between childhood trauma and cognition among those at clinical high risk (CHR) for psychosis remains unexplored. Our sample consisted of 626 CHR children and 279 healthy controls who were recruited as part of the North American Prodrome Longitudinal Study 2. Childhood trauma up to the age of 16 (psychological, physical, and sexual abuse, emotional neglect, and bullying) was assessed using the Childhood Trauma and Abuse Scale. Multiple domains of cognition were measured at baseline and at the time of psychosis conversion, using standardized assessments. In the CHR group, there was a trend for better performance in individuals who reported a history of multiple types of childhood trauma compared with those with no/one type of trauma (Cohen d = 0.16). A history of multiple trauma types was not associated with greater cognitive change in CHR converters over time. Our findings tentatively suggest that there may be different mechanisms leading to CHR states. Individuals at clinical high risk who have experienced multiple types of childhood trauma may have more typically developing premorbid cognitive functioning than those who report minimal trauma. Further research is needed to unravel the complexity of factors underlying the development of at-risk states.
Innovation Concept: The goal of emergency medicine training is to produce physicians who can competently run an emergency department (ED) shift. While many workplace-based ED assessments focus on discrete tasks of the discipline, others emphasize assessment of performance across the entire shift. However, the quality of assessments is generally poor and these tools often lack validity evidence. The use of entrustment scale anchors may help to address these psychometric issues. The aim of this study was to develop and gather validity evidence for a novel tool to assess a resident's ability to independently run an ED shift. Methods: Through a nominal group technique, local and national stakeholders identified dimensions of performance reflective of a competent ED physician. These dimensions were included in a new tool that was piloted in the Department of Emergency Medicine at the University of Ottawa during a 4-month period. Psychometric characteristics of the items were calculated, and a generalizability analysis was used to determine the reliability of scores. An ANOVA was conducted to determine whether scores increased as a function of training level (junior = PGY1-2, intermediate = PGY3, senior = PGY4-5) and varied by ED treatment area. Safety for independent practice was analyzed with a dichotomous score. Curriculum, Tool or Material: The developed Ottawa Emergency Department Shift Observation Tool (O-EDShOT) includes 12 items rated on a 5-point entrustment scale, with a global assessment item and 2 short-answer questions. Eight hundred and thirty-three assessments were completed by 78 physicians for 45 residents. Mean scores differed significantly by training level (p < .001), with junior residents receiving lower ratings (3.48 ± 0.69) than intermediate residents (3.98 ± 0.48), who in turn received lower ratings than senior residents (4.54 ± 0.42). Scores did not vary by ED treatment area (p > .05). Residents judged to be safe to independently run the shift had significantly higher mean scores than those judged not to be safe (4.74 ± 0.31 vs 3.75 ± 0.66; p < .001). Fourteen observations per resident, the typical number recorded during a 1-month rotation, were required to achieve a reliability of 0.80. Conclusion: The O-EDShOT successfully discriminated between junior, intermediate, and senior-level residents regardless of ED treatment area. Multiple sources of evidence support the O-EDShOT producing valid scores for assessing a resident's ability to independently run an ED shift.
To evaluate the long-term efficacy of deutetrabenazine in patients with tardive dyskinesia (TD) by examining response rates from baseline in Abnormal Involuntary Movement Scale (AIMS) scores. Preliminary results of this responder analysis are reported here.
Background
In the 12-week ARM-TD and AIM-TD studies, the odds of response to deutetrabenazine treatment were higher than the odds of response to placebo at all response levels, and there were low rates of overall adverse events and discontinuations associated with deutetrabenazine.
Method
Patients with TD who completed ARM-TD or AIM-TD were included in this open-label, single-arm extension study, in which all patients restarted/started deutetrabenazine 12 mg/day, titrating up to a maximum total daily dose of 48 mg/day based on dyskinesia control and tolerability. The study comprised a 6-week titration period and a long-term maintenance phase. The cumulative proportion of AIMS responders from baseline was assessed. Response was defined as a percent improvement from baseline for each patient from 10% to 90% in 10% increments. AIMS score was assessed by local site ratings for this analysis.
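A minimal sketch of this cumulative responder analysis follows, on invented AIMS scores: percent improvement from baseline is computed per patient, then the proportion reaching each 10% threshold is tabulated.

```python
# Minimal sketch of the cumulative responder analysis on invented AIMS
# scores: percent improvement from baseline per patient, then the share
# reaching each 10% threshold.
import numpy as np

rng = np.random.default_rng(7)
baseline = rng.uniform(8, 20, 145)
week54 = baseline * rng.uniform(0.2, 1.1, 145)   # simulated follow-up scores
pct_improvement = 100 * (baseline - week54) / baseline

for thr in range(10, 100, 10):
    print(f">= {thr}% AIMS improvement: {np.mean(pct_improvement >= thr):.0%} of patients")
```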
Results
343 patients enrolled in the extension study (111 patients had received placebo in the parent study and 232 had received deutetrabenazine). At Week 54 (n=145; total daily dose [mean±standard error]: 38.1±0.9 mg), 63% of patients receiving deutetrabenazine achieved ≥30% response, 48% achieved ≥50% response, and 26% achieved ≥70% response. At Week 80 (n=66; total daily dose: 38.6±1.1 mg), 76% of patients achieved ≥30% response, 59% achieved ≥50% response, and 36% achieved ≥70% response. Treatment was generally well tolerated.
Conclusions
Patients who received long-term treatment with deutetrabenazine achieved response rates higher than those observed in positive short-term studies, indicating clinically meaningful long-term treatment benefit.
Presented at: American Academy of Neurology Annual Meeting; April 21–27, 2018, Los Angeles, California, USA.
Funding Acknowledgements: This study was supported by Teva Pharmaceuticals, Petach Tikva, Israel.
To evaluate the long-term safety and tolerability of deutetrabenazine in patients with tardive dyskinesia (TD) at 2 years.
Background
In the 12-week ARM-TD and AIM-TD studies, deutetrabenazine showed clinically significant improvements in Abnormal Involuntary Movement Scale scores compared with placebo, and there were low rates of overall adverse events (AEs) and discontinuations associated with deutetrabenazine.
Method
Patients who completed ARM-TD or AIM-TD were included in this open-label, single-arm extension study, in which all patients restarted/started deutetrabenazine 12 mg/day, titrating up to a maximum total daily dose of 48 mg/day based on dyskinesia control and tolerability. The study comprised a 6-week titration period and a long-term maintenance phase. Safety measures included the incidence of AEs, serious AEs (SAEs), and AEs leading to withdrawal, dose reduction, or dose suspension. Exposure-adjusted incidence rates (EAIRs; incidence/patient-years) were used to compare AE frequencies for long-term treatment with those for short-term treatment (ARM-TD and AIM-TD). This analysis reports results up to 2 years (Week 106).
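The EAIR named above is simply events divided by patient-years of exposure; the sketch below shows the computation, using the reported 331.4 patient-years but invented event counts.

```python
# The EAIR named above is events per patient-year of exposure. The exposure
# figure matches the abstract; the event counts are invented for illustration.
def eair(n_events: int, patient_years: float) -> float:
    """Exposure-adjusted incidence rate: incidence / patient-years."""
    return n_events / patient_years

exposure = 331.4                                     # patient-years, as reported
print(f"SAE EAIR: {eair(50, exposure):.2f}")         # 50 is a hypothetical count
print(f"withdrawal EAIR: {eair(26, exposure):.2f}")  # ~0.08 with 26 hypothetical events
```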
Results
343 patients were enrolled (111 patients received placebo in the parent study and 232 received deutetrabenazine). There were 331.4 patient-years of exposure in this analysis. Through Week 106, EAIRs of AEs were comparable to or lower than those observed with short-term deutetrabenazine and placebo, including AEs of interest (akathisia/restlessness [long-term EAIR: 0.02; short-term EAIR range: 0–0.25], anxiety [0.09; 0.13–0.21], depression [0.09; 0.04–0.13], diarrhea [0.06; 0.06–0.34], parkinsonism [0.01; 0–0.08], somnolence/sedation [0.09; 0.06–0.81], and suicidality [0.02; 0–0.13]). The frequency of SAEs (EAIR 0.15) was similar to that observed with short-term placebo (0.33) and deutetrabenazine (range 0.06–0.33) treatment. AEs leading to withdrawal (0.08), dose reduction (0.17), and dose suspension (0.06) were uncommon.
Conclusions
These results confirm the safety outcomes seen in the ARM-TD and AIM-TD parent studies, demonstrating that deutetrabenazine is well tolerated for long-term use in TD patients.
Presented at: American Academy of Neurology Annual Meeting; April 21–27, 2018, Los Angeles, California, USA.
Funding Acknowledgements: This study was supported by Teva Pharmaceuticals, Petach Tikva, Israel.
Childhood adversity is associated with poor mental and physical health outcomes across the life span. Alterations in the hypothalamic–pituitary–adrenal axis are considered a key mechanism underlying these associations, although findings have been mixed. These inconsistencies suggest that other aspects of stress processing may underlie variations in these associations, and that differences in adversity type, sex, and age may be relevant. The current study investigated the relationship between childhood adversity, stress perception, and morning cortisol, and examined whether differences in adversity type (generalized vs. threat and deprivation), sex, and age had distinct effects on these associations. Salivary cortisol samples, daily hassle stress ratings, and retrospective measures of childhood adversity were collected from a large sample of youth at risk for serious mental illness including psychoses (n = 605, mean age = 19.3). Results indicated that childhood adversity was associated with increased stress perception, which subsequently predicted higher morning cortisol levels; however, these associations were specific to threat exposures in females. These findings highlight the role of stress perception in stress vulnerability following childhood adversity and point to potential sex differences in the impact of threat exposures.
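The serial pathway reported above (adversity, then increased stress perception, then higher morning cortisol) can be illustrated with a simple product-of-coefficients mediation sketch on simulated data; the actual analysis additionally modeled adversity type, sex, and age, which this toy example omits.

```python
# Toy product-of-coefficients mediation mirroring the pathway above
# (adversity -> stress perception -> morning cortisol) on simulated data;
# the real analysis also modeled adversity type, sex, and age.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 605
adversity = rng.poisson(2, n).astype(float)
stress = 1.0 + 0.3 * adversity + rng.normal(0, 1, n)
cortisol = 5.0 + 0.4 * stress + rng.normal(0, 1, n)

a = sm.OLS(stress, sm.add_constant(adversity)).fit().params[1]  # adversity -> stress
b = sm.OLS(cortisol, sm.add_constant(np.column_stack([stress, adversity]))).fit().params[1]
print(f"indirect (mediated) effect a*b = {a * b:.3f}")
```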
The role that vitamin D plays in pulmonary function remains uncertain. Epidemiological studies have reported mixed findings for the serum 25-hydroxyvitamin D (25(OH)D)–pulmonary function association. We conducted the largest cross-sectional meta-analysis of the 25(OH)D–pulmonary function association to date, based on nine European ancestry (EA) cohorts (n 22 838) and five African ancestry (AA) cohorts (n 4290) in the Cohorts for Heart and Aging Research in Genomic Epidemiology Consortium. Data were analysed using linear models by cohort and ancestry. Effect modification by smoking status (current/former/never) was tested. Results were combined using fixed-effects meta-analysis. Mean serum 25(OH)D was 68 (sd 29) nmol/l for EA and 49 (sd 21) nmol/l for AA. For each 1 nmol/l higher 25(OH)D, forced expiratory volume in the 1st second (FEV1) was higher by 1·1 ml in EA (95 % CI 0·9, 1·3; P<0·0001) and 1·8 ml (95 % CI 1·1, 2·5; P<0·0001) in AA (P for race difference = 0·06), and forced vital capacity (FVC) was higher by 1·3 ml in EA (95 % CI 1·0, 1·6; P<0·0001) and 1·5 ml (95 % CI 0·8, 2·3; P=0·0001) in AA (P for race difference = 0·56). Among EA, the 25(OH)D–FVC association was stronger in smokers: per 1 nmol/l higher 25(OH)D, FVC was higher by 1·7 ml (95 % CI 1·1, 2·3) for current smokers and 1·7 ml (95 % CI 1·2, 2·1) for former smokers, compared with 0·8 ml (95 % CI 0·4, 1·2) for never smokers. In summary, the 25(OH)D associations with FEV1 and FVC were positive in both ancestries. In EA, a stronger association was observed for smokers compared with never smokers, which supports the importance of vitamin D in vulnerable populations.
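The fixed-effects pooling named above can be sketched with inverse-variance weights; the per-cohort betas and standard errors below are placeholders, not the published cohort estimates.

```python
# Inverse-variance fixed-effects pooling, the meta-analytic method named
# above; per-cohort betas/SEs are placeholders, not the published estimates.
import numpy as np

betas = np.array([1.0, 1.3, 0.9, 1.2, 1.1])  # ml FEV1 per nmol/l 25(OH)D, per cohort
ses = np.array([0.3, 0.4, 0.2, 0.5, 0.3])

w = 1 / ses**2                               # inverse-variance weights
beta = np.sum(w * betas) / np.sum(w)
se = np.sqrt(1 / np.sum(w))
print(f"pooled beta = {beta:.2f} ml (95% CI {beta - 1.96*se:.2f}, {beta + 1.96*se:.2f})")
```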
Much of the interest in youth at clinical high risk (CHR) of psychosis has been in understanding conversion. Recent literature has suggested that less than 25% of those who meet established criteria for being at CHR of psychosis go on to develop a psychotic illness. However, little is known about the outcome of those who do not make the transition to psychosis. The aim of this paper was to examine clinical symptoms and functioning in the second North American Prodrome Longitudinal Study (NAPLS 2) among those individuals who, by the end of 2 years in the study, had not developed psychosis.
Methods
In NAPLS-2, 278 CHR participants completed the 2-year follow-up and had not made the transition to psychosis. At 2 years, the sample was divided into three groups: those whose symptoms were in remission, those who were still symptomatic, and those whose symptoms had become more severe.
Results
There was no difference between those who remitted early in the study and those who remitted at 1 or 2 years. At 2 years, those in remission had fewer symptoms and improved functioning compared with the two symptomatic groups. However, all three groups had poorer social functioning and cognition than healthy controls.
Conclusions
A detailed examination of the clinical and functional outcomes of those who did not make the transition to psychosis did not contribute to predicting who may make the transition or who may have an earlier remission of attenuated psychotic symptoms.
We report daptomycin minimum inhibitory concentrations (MICs) for vancomycin-resistant Enterococcus faecium isolated from bloodstream infections over a 4-year period. The daptomycin MIC increased over time hospital-wide for initial isolates and increased over time within patients, culminating in 40% of patients having daptomycin-nonsusceptible isolates in the final year of the study.