Objectives: Activities that require active thinking, such as occupations, may influence cognitive function and its change over time. Associations between retirement and dementia risk have been reported; however, the role of retirement age in these associations is unclear. We assessed associations of occupation and retirement age with cognitive decline in the US community-based Atherosclerosis Risk in Communities (ARIC) cohort.
Methods: We included 14,090 ARIC participants, followed for changes in cognition for up to 21 years. Information on current or most recent occupation was collected at ARIC baseline (1987–1989; participants aged 45–64 years) and categorized according to the 1980 US Census protocols and the Nam-Powers-Boyd occupational status score. Follow-up data on retirement were collected during 1999–2007 and classified as retired versus not retired at age 70. Trajectories of global cognitive factor scores from ARIC visit 2 (1990–1992) to visit 5 (2011–2013) were estimated, and associations with occupation and age at retirement were studied using generalized estimating equation models, stratified by race and sex, and adjusted for demographics and comorbidities.
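For readers unfamiliar with generalized estimating equations, the sketch below (Python/statsmodels) illustrates the kind of repeated-measures model described above; it is not the authors' analysis code, and the column names are hypothetical.

```python
# Illustrative GEE for repeated cognitive scores in long format (one row per
# participant per visit); column names are hypothetical, not from ARIC.
import statsmodels.api as sm
import statsmodels.formula.api as smf

model = smf.gee(
    "cognitive_factor_score ~ years_since_baseline * C(occupation_category)"
    " + age_at_baseline + education + diabetes + hypertension",
    groups="participant_id",
    data=df_long,
    cov_struct=sm.cov_struct.Exchangeable(),  # within-person correlation structure
    family=sm.families.Gaussian(),
)
result = model.fit()
print(result.summary())  # interaction terms give occupation-specific slopes; in practice
                         # the model would be fitted within each race-sex stratum
```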
Results: Mean age (SD) at first cognitive assessment was 57.0 (5.72) years. Higher occupational status and white-collar occupations were significantly associated with higher cognitive function at baseline. Occupation was associated with cognitive decline over 21 years only in women, and the direction of the effect on cognitive function differed between black and white women: in white women, the decline in cognitive function was greater in homemakers and low-status occupations, whereas in black women, less decline was found in homemakers and in low (compared to high) occupational status. Interestingly, retirement on or before age 70 was associated with less 21-year cognitive decline in all race-sex strata, except for black women.
Conclusions: Associations between occupation, retirement age and cognitive function substantially differed by race and sex. Further research should explore reasons for the observed associations and race-sex differences.
In response to the COVID-19 pandemic, we implemented a plasma coordination center within two months to support transfusions for two outpatient randomized controlled trials. The center design was based on an investigational drug services model and a Food and Drug Administration-compliant database to manage blood product inventory and trial safety.
Methods:
A core investigational team adapted a cloud-based platform to randomize patient assignments and track inventory distribution of control plasma and high-titer COVID-19 convalescent plasma of different blood groups from 29 donor collection centers directly to blood banks serving 26 transfusion sites.
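As a purely illustrative sketch (not the trial platform's actual logic), the snippet below shows how a randomization call and a blood-group-compatible inventory lookup could be combined. The 1:1 allocation and the field names are assumptions; the compatibility table follows the standard ABO rule for plasma (AB plasma is the universal plasma donor).

```python
# Illustrative only: randomize a participant and allocate a compatible plasma unit.
import random

PLASMA_DONORS_FOR_RECIPIENT = {   # recipient ABO group -> acceptable donor ABO groups
    "O": ["O", "A", "B", "AB"],
    "A": ["A", "AB"],
    "B": ["B", "AB"],
    "AB": ["AB"],
}

def assign_and_allocate(recipient_abo, inventory, rng=None):
    """inventory: dict of unit counts keyed by (arm, donor ABO), e.g. ("CCP", "A")."""
    rng = rng or random.Random()
    arm = rng.choice(["CCP", "control"])          # assumed 1:1 randomization
    for donor_abo in PLASMA_DONORS_FOR_RECIPIENT[recipient_abo]:
        if inventory.get((arm, donor_abo), 0) > 0:
            inventory[(arm, donor_abo)] -= 1      # decrement the shared digital inventory
            return arm, donor_abo
    return arm, None                              # flag for restocking/overnight shipment
```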
Results:
We performed 1,351 transfusions in 16 months. The transparency of the digital inventory at each site was critical to facilitate qualification, randomization, and overnight shipments of blood group-compatible plasma for transfusion into trial participants. While inventory challenges were heightened with COVID-19 convalescent plasma, the cloud-based system and the flexible approach of the plasma coordination center staff across the blood bank network enabled decentralized procurement and distribution of investigational products to maintain inventory thresholds and overcome local supply chain constraints at the sites.
Conclusion:
The rapid creation of a plasma coordination center for outpatient transfusions is infrequent in the academic setting. Distributing more than 3,100 plasma units to blood banks charged with managing investigational inventory across the U.S. in a decentralized manner posed operational and regulatory challenges while providing opportunities for the plasma coordination center to contribute to research of global importance. This program can serve as a template in subsequent public health emergencies.
The Stricker Learning Span (SLS) is a computer-adaptive word list memory test specifically designed for remote assessment and self-administration on a web-based multi-device platform (Mayo Test Drive). Given recent evidence suggesting the prominence of learning impairment in preclinical Alzheimer’s disease (AD), the SLS places greater emphasis on learning than on delayed memory compared with traditional word list memory tests (see Stricker et al., Neuropsychology, in press, for review and test details). The primary study aim was to establish criterion validity of the SLS by comparing the ability of the remotely administered SLS and the in-person administered Rey Auditory Verbal Learning Test (AVLT) to differentiate biomarker-defined groups in cognitively unimpaired (CU) individuals on the Alzheimer’s continuum.
Participants and Methods:
Mayo Clinic Study of Aging CU participants (N=319; mean age=71, SD=11; mean education=16, SD=2; 47% female) completed a brief remote cognitive assessment (∼0.5 months from the in-person visit). Brain amyloid and brain tau PET scans were available within 3 years. Overlapping groups were formed for 1) those on the Alzheimer’s disease (AD) continuum (A+, n=110) or not (A-, n=209), and 2) those with biological AD (A+T+, n=43) vs no evidence of AD pathology (A-T-, n=181). Primary neuropsychological outcome variables were sum of trials for both the SLS and AVLT. Secondary outcome variables examined comparability of learning (1-5 total) and delay performances. Linear model ANOVAs were used to investigate biomarker subgroup differences, and Hedges’ g effect sizes were derived, with and without adjustment for demographic variables (age, education, sex).
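A minimal sketch of the effect-size computation described above is given below (Python). The residualization step used for the demographically adjusted values is an assumption about the procedure, and column names are illustrative.

```python
# Hedges' g for a biomarker contrast, unadjusted and after residualizing on demographics.
import numpy as np
import statsmodels.formula.api as smf

def hedges_g(x, y):
    nx, ny = len(x), len(y)
    pooled_sd = np.sqrt(((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1))
                        / (nx + ny - 2))
    d = (np.mean(x) - np.mean(y)) / pooled_sd       # Cohen's d
    return d * (1 - 3 / (4 * (nx + ny) - 9))        # small-sample correction -> Hedges' g

pos, neg = df["group"] == "A+T+", df["group"] == "A-T-"
g_unadjusted = hedges_g(df.loc[pos, "sls_sum"], df.loc[neg, "sls_sum"])

resid = smf.ols("sls_sum ~ age + education + C(sex)", data=df).fit().resid
g_adjusted = hedges_g(resid[pos], resid[neg])
```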
Results:
Both SLS and AVLT performances were worse in the biomarker-positive relative to the biomarker-negative groups (unadjusted p’s<.05). Because biomarker-positive groups were significantly older than biomarker-negative groups, group differences were attenuated after adjusting for demographic variables, but SLS remained significant for the A+ vs A- and A+T+ vs A-T- comparisons (adjusted p’s<.05) and AVLT approached significance (p’s .05-.10). The effect sizes for the SLS were slightly better (qualitatively, no statistical comparison) than those for the AVLT for separating biomarker-defined CU groups. For the A+ vs A- and A+T+ vs A-T- comparisons, unadjusted effect sizes for SLS were -0.53 and -0.81 and for AVLT were -0.47 and -0.61, respectively; adjusted effect sizes for SLS were -0.25 and -0.42 and for AVLT were -0.19 and -0.26, respectively. In secondary analyses, learning and delay variables were similar in their ability to separate biomarker groups. For example, for the A+T+ vs A-T- comparison, the unadjusted effect size for SLS learning (-.80) was similar to that for SLS delay (.76), and that for AVLT learning (-.58) was similar to AVLT 30-minute delay (-.55).
Conclusions:
Remotely administered SLS performed similarly to the in-person-administered AVLT in its ability to separate biomarker-defined groups in CU individuals, providing evidence of criterion validity. The SLS showed significantly worse performance in the A+ and A+T+ groups (relative to the A- and A-T- groups) in this CU sample after demographic adjustment, suggesting potential sensitivity to detecting transitional cognitive decline in preclinical AD. Measures emphasizing learning should be given equal consideration as measures of delayed memory in AD-focused studies, particularly in the preclinical phase.
With persistent incidence, incomplete vaccination rates, confounding respiratory illnesses, and few therapeutic interventions available, COVID-19 continues to be a burden on the pediatric population. During a surge, it is difficult for hospitals to direct limited healthcare resources effectively. While the overwhelming majority of pediatric infections are mild, there have been life-threatening exceptions that illuminated the need to proactively identify pediatric patients at risk of severe COVID-19 and other respiratory infectious diseases. However, a nationwide capability for developing validated computational tools to identify pediatric patients at risk using real-world data does not exist.
Methods:
HHS ASPR BARDA sought, through the power of competition in a challenge, to create computational models to address two clinically important questions using the National COVID Cohort Collaborative: (1) Of pediatric patients who test positive for COVID-19 in an outpatient setting, who are at risk for hospitalization? (2) Of pediatric patients who test positive for COVID-19 and are hospitalized, who are at risk for needing mechanical ventilation or cardiovascular interventions?
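As a purely hypothetical illustration of task (1), the baseline below shows the general shape of such a risk model (Python/scikit-learn); the feature names are invented, and the winning challenge models are not reproduced here.

```python
# Hypothetical baseline for predicting pediatric hospitalization after a positive test.
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

features = ["age_years", "n_prior_encounters", "asthma", "obesity", "immunocompromised"]
X, y = cohort[features], cohort["hospitalized_within_30d"]

clf = GradientBoostingClassifier(random_state=0)
print(cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())  # cross-validated AUROC
```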
Results:
This challenge was the first multi-agency, coordinated computational challenge carried out by the federal government in response to a public health emergency. Fifty-five computational models were evaluated across both tasks, and two winners and three honorable mentions were selected.
Conclusion:
This challenge serves as a framework for how the government, research communities, and large data repositories can be brought together to source solutions when resources are strained during a pandemic.
The Stricker Learning Span (SLS) is a computer-adaptive digital word list memory test specifically designed for remote assessment and self-administration on a web-based multi-device platform (Mayo Test Drive). We aimed to establish criterion validity of the SLS by comparing its ability to differentiate biomarker-defined groups with that of the in-person-administered Rey Auditory Verbal Learning Test (AVLT).
Method:
Participants (N = 353; mean age = 71, SD = 11; 93% cognitively unimpaired [CU]) completed the AVLT during an in-person visit, the SLS remotely (within 3 months) and had brain amyloid and tau PET scans available (within 3 years). Overlapping groups were formed for 1) those on the Alzheimer’s disease (AD) continuum (amyloid PET positive, A+, n = 125) or not (A-, n = 228), and those with biological AD (amyloid and tau PET positive, A+T+, n = 55) vs no evidence of AD pathology (A−T−, n = 195). Analyses were repeated among CU participants only.
Results:
The SLS and AVLT showed similar ability to differentiate biomarker-defined groups when comparing AUROCs (p’s > .05). In logistic regression models, SLS contributed significantly to predicting biomarker group beyond age, education, and sex, including when limited to CU participants. Medium (A− vs A+) to large (A−T− vs A+T+) unadjusted effect sizes were observed for both SLS and AVLT. Learning and delay variables were similar in terms of ability to separate biomarker groups.
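A sketch of both analyses follows (Python, assumed column names). DeLong's test is the usual way to compare correlated AUROCs; for the "beyond demographics" question, a likelihood-ratio test of nested logistic models is shown.

```python
# Illustrative only: AUROC for each test and a nested-model test of incremental value.
import statsmodels.formula.api as smf
from scipy.stats import chi2
from sklearn.metrics import roc_auc_score

d = df.assign(pos=(df["group"] == "A+T+").astype(int))
auc_sls = roc_auc_score(d["pos"], -d["sls_sum"])    # lower scores -> more likely biomarker+
auc_avlt = roc_auc_score(d["pos"], -d["avlt_sum"])

base = smf.logit("pos ~ age + education + C(sex)", data=d).fit(disp=0)
full = smf.logit("pos ~ age + education + C(sex) + sls_sum", data=d).fit(disp=0)
lr = 2 * (full.llf - base.llf)
p_value = chi2.sf(lr, df=1)                         # does SLS add beyond demographics?
```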
Conclusions:
Remotely administered SLS performed similarly to in-person-administered AVLT in its ability to separate biomarker-defined groups, providing evidence of criterion validity. Results suggest the SLS may be sensitive to detecting subtle objective cognitive decline in preclinical AD.
Ethnic disparities in treatment with clozapine, the antipsychotic recommended for treatment-resistant schizophrenia (TRS), have been reported. However, these investigations frequently suffer from potential residual confounding. For example, few studies have restricted the analyses to TRS samples and none has controlled for benign ethnic neutropenia.
Objectives
This study investigated if service-users’ ethnicity influenced clozapine prescription in a cohort of people with TRS.
Methods
Information from the clinical records of South London and Maudsley NHS Trust was used to identify a cohort of service-users with TRS between 2007 and 2017. In this cohort, we used logistic regression to investigate any association between ethnicity and clozapine prescription while adjusting for potential confounding variables, including sociodemographic factors, psychiatric multimorbidity, substance use, benign ethnic neutropenia, and inpatient and outpatient care received.
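The snippet below is a hedged sketch of the kind of adjusted model described (Python/statsmodels); variable names and covariate coding are illustrative, not the study's.

```python
# Adjusted logistic regression for clozapine prescription, White British as reference.
import numpy as np
import statsmodels.formula.api as smf

fit = smf.logit(
    "clozapine_prescribed ~ C(ethnicity, Treatment('White British'))"
    " + age + C(sex) + psychiatric_multimorbidity + substance_use"
    " + benign_ethnic_neutropenia + inpatient_days + outpatient_contacts",
    data=trs_cohort,
).fit()
odds_ratios = np.exp(fit.params)        # e.g. the reported ~0.5 for Black African ethnicity
conf_int = np.exp(fit.conf_int())
```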
Results
We identified 2239 cases that met the criteria for TRS. After adjusting for confounding variables, people with Black African ethnicity had half the odds of being treated with clozapine, and people with Black Caribbean or Other Black backgrounds had about two-thirds the odds of being treated with clozapine, compared with White British service-users. No disparities were observed for other ethnic groups, namely Other White background, South Asian, Other Asian, or any other ethnicity.
Conclusions
There was evidence of inequities in care among Black ethnic groups with TRS. Interventions targeting barriers in access to healthcare are recommended.
Disclosure
During the conduction of the study, DFdF, GKS, and RH received funds from the NIHR Maudsley Biomedical Research Centre. For other activities outside the submitted work, DFdF received research funding from the UK Department of Health and Social Care, Janss
Early in the COVID-19 pandemic, the World Health Organization stressed the importance of daily clinical assessments of infected patients, yet current approaches frequently consider cross-sectional timepoints, cumulative summary measures, or time-to-event analyses. Statistical methods are available that make use of the rich information content of longitudinal assessments. We demonstrate the use of a multistate transition model to assess the dynamic nature of COVID-19-associated critical illness using daily evaluations of COVID-19 patients from 9 academic hospitals. We describe the accessibility and utility of methods that consider the clinical trajectory of critically ill COVID-19 patients.
Background: The impact of cervical dystonia (CD) severity on presentation subtype and onabotulinumtoxinA utilization was examined in the completer population from CD PROBE (CD Patient Registry for Observation of BOTOX® Efficacy). Methods: In this multicenter, prospective, observational registry, patients with CD were treated with onabotulinumtoxinA according to injectors’ standard of care. Completers were patients who completed all 3 treatment sessions and had accompanying data. Results: Of N=1046 patients enrolled, n=350 were completers. Completers were on average 57.3 years old, 74.9% female, 94.6% white, and 60.6% toxin-naïve. Baseline severity was mild in 32.6%, moderate in 54.3%, and severe in 13.1%. Torticollis was the most common presentation at baseline (mild: 44.7%, moderate: 55.8%, severe: 63.0%), followed by laterocollis (mild: 42.1%, moderate: 32.6%, severe: 26.1%). Median onabotulinumtoxinA dose increased over time, from 160U to 200U for torticollis and from 170U to 200U for laterocollis. For all severities, median total dose increased from injection 1 to injection 3 (mild: 138U to 165U, moderate: 183U to 200U, severe: 200U to 285U). Eighty-one patients (23.1%) reported 139 treatment-related adverse events. There were no treatment-related serious adverse events and no new safety signals. Conclusions: CD severity impacted presentation subtype frequency and onabotulinumtoxinA utilization in CD PROBE, with higher and tailored dosing observed over time and with increasing disease severity.
Few data exist on provider perspectives about counselling and shared decision-making for complex CHD, ways to support and improve the process, and barriers to effective communication. The goal of this qualitative study was to determine providers’ perspectives regarding factors that are integral to shared decision-making with parents faced with complex CHD in their fetus or newborn, and barriers and facilitators to engaging in effective shared decision-making.
Methods:
We conducted semi-structured interviews with providers from different areas of practice who care for fetuses and/or children with CHD. Providers were recruited from four geographically diverse centres. Interviews were recorded, transcribed, and analysed for key themes using an open coding process with a grounded theory approach.
Results:
Interviews were conducted with 31 providers; paediatric cardiologists (n = 7) were the largest group represented, followed by nurses (n = 6) and palliative care providers (n = 5). Key barriers to communication with parents that providers identified included variability among providers themselves, factors that influenced parental comprehension or understanding, discrepant expectations, circumstantial barriers, and trust/relationship with providers. When discussing informational needs of parents, providers focused on comprehensive short- and long-term outcomes, quality of life, and breadth and depth that aligned with parental goals and needs. In discussing resources to support shared decision-making, providers emphasised the need for comprehensive, up-to-date information that was accessible to parents of varying situations and backgrounds.
Conclusions:
Provider perspectives on decision-making with families with CHD highlighted key communication issues, informational priorities, and components of decision support that can enhance shared decision-making.
Parents who receive a diagnosis of a severe, life-threatening CHD for their foetus or neonate face a complex and stressful decision between termination, palliative care, or surgery. Understanding how parents make this initial treatment decision is critical for developing interventions to improve counselling for these families.
Methods:
We conducted focus groups in four academic medical centres across the United States of America with a purposive sample of parents who chose termination, palliative care, or surgery for their foetus or neonate diagnosed with severe CHD.
Results:
Ten focus groups were conducted with 56 parents (mean age = 34 years; 80% female; 89% White). Results were constructed around three domains: decision-making approaches; values and beliefs; and decision-making challenges. Parents discussed varying approaches to making the decision, ranging from relying on their “gut feeling” to desiring statistics and probabilities. Religious and spiritual beliefs often guided the decision to not terminate the pregnancy. Quality of life was an important consideration, including how each option would impact the child (e.g., pain or discomfort, cognitive and physical abilities) and their family (e.g., care for other children, marriage, and career). Parents reported inconsistent communication of options by clinicians and challenges related to time constraints for making a decision and difficulty in processing information when distressed.
Conclusion:
This study offers important insights that can be used to design interventions to improve decision support and family-centred care in clinical practice.
Anticholinergic medications block cholinergic transmission. The central effects of anticholinergic drugs can be particularly marked in patients with dementia. Furthermore, anticholinergics antagonise the effects of cholinesterase inhibitors, the main dementia treatment.
Objectives
This study aimed to assess anticholinergic drug prescribing among dementia patients before and after admission to UK acute hospitals.
Methods
352 patients with dementia were included from 17 hospitals in the UK. All were admitted to surgical, medical or Care of the Elderly wards in 2019. Information about patients’ prescriptions was recorded on a standardised form. An evidence-based online calculator was used to calculate the anticholinergic drug burden of each patient. The correlation between anticholinergic burden scores on admission and at discharge was tested with Spearman’s rank correlation.
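The comparison described above could be computed as in the sketch below (Python/SciPy); the column names are assumptions, and the burden scores themselves come from the online calculator.

```python
# Spearman correlation between admission and discharge anticholinergic burden scores.
from scipy.stats import spearmanr

adm, dis = df["acb_admission"], df["acb_discharge"]
rho, p = spearmanr(adm, dis)                         # paired scores, one pair per patient
print(f"rho={rho:.3f}, p={p:.1e}")
print(f">=1: {(adm >= 1).mean():.1%} on admission vs {(dis >= 1).mean():.1%} at discharge")
```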
Results
Table 1 shows patient demographics. On admission, 37.8% of patients had an anticholinergic burden score ≥1 and 5.68% had a score ≥3. At discharge, 43.2% of patients had an anticholinergic burden score ≥1 and 9.1% had a score ≥3. The increase was statistically significant (rho = 0.688; p = 2.2 × 10⁻¹⁶). The most common group of anticholinergic medications prescribed at discharge was psychotropics (see Figure 1). Among patients prescribed cholinesterase inhibitors, 44.9% were also taking anticholinergic medications.
Conclusions
This multicentre cross-sectional study found that people with dementia are frequently prescribed anticholinergic drugs, even if also taking cholinesterase inhibitors, and are significantly more likely to be discharged with a higher anticholinergic drug burden than on admission to hospital.
Conflict of interest
This project was planned and executed by the authors on behalf of SPARC (Student Psychiatry Audit and Research Collaborative). We thank the National Student Association of Medical Research for allowing us use of the Enketo platform. Judith Harrison was su
Remote consultation technology has been rapidly adopted due to the COVID-19 pandemic. However, some healthcare settings have faced barriers in implementation. We present a study to investigate changes in rates of remote consultation during the pandemic using a large electronic health record (EHR) dataset.
Methods
The Clinical Record Interactive Search tool (CRIS) was used to examine de-identified EHR data of people receiving mental healthcare in South London, UK. Data from around 37,500 patients were analysed for each week between 7th January 2019 and 20th September 2020 using linear regression and locally estimated scatterplot smoothing (LOESS) to investigate changes in the number of clinical contacts (in-person, remote or non-attended) with mental healthcare professionals and in prescribing of antipsychotics and mood stabilisers. The data are presented in an interactive dashboard: http://rpatel.co.uk/TelepsychiatryDashboard.
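A simplified sketch of the weekly-counts analysis is shown below (Python/statsmodels). The post-onset indicator, its cut-off week, and the column names are assumptions about how the published model was parameterized.

```python
# Weekly contact counts: step-change regression plus a LOESS curve for visualization.
import statsmodels.formula.api as smf
from statsmodels.nonparametric.smoothers_lowess import lowess

weekly["post_onset"] = (weekly["week_start"] >= "2020-03-23").astype(int)  # assumed cut-off
fit = smf.ols("in_person_contacts ~ post_onset + week_index", data=weekly).fit()
print(fit.params["post_onset"], fit.conf_int().loc["post_onset"])          # step change, 95% CI

smoothed = lowess(weekly["remote_contacts"], weekly["week_index"], frac=0.2)
```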
Results
The frequency of in-person contacts was substantially reduced following the onset of the pandemic (β coefficient: -5829.6 contacts, 95% CI -6919.5 to -4739.6, p<0.001), while the frequency of remote contacts increased significantly (β coefficient: 3338.5 contacts, 95% CI 3074.4 to 3602.7, p<0.001). Rates of remote consultation were lower in older adults than in working age adults, children and adolescents. Despite the increase in remote contact, antipsychotic and mood stabiliser prescribing remained at similar levels.
Conclusions
The COVID-19 pandemic has been associated with a marked increase in remote consultation, particularly among younger patients. However, there was no evidence that this has led to changes in prescribing. Further work is needed to support older patients in accessing remote mental healthcare.
Disclosure
All authors have completed the ICMJE uniform disclosure form at www.icmje.org/coi_disclosure.pdf and declare: RS has received funding from Janssen, GSK and Takeda outside the submitted work. RP has received funding from Janssen, Induction Healthcare and H
To explore best practices and challenges in providing school meals during COVID-19 in a low-income, predominantly Latino, urban–rural region.
Design:
Semi-structured interviews with school district stakeholders and focus groups with parents were conducted to explore school meal provision during COVID-19 from June to August 2020. Data were coded and themes were identified to guide analysis. Community organisations were involved in all aspects of study design, recruitment, data collection and analysis.
Setting:
Six school districts in California’s San Joaquin Valley.
Participants:
School district stakeholders (n 11) included food service directors, school superintendents and community partners (e.g. funders, food cooperative). Focus groups (n 6) were comprised of parents (n 29) of children participating in school meal programmes.
Results:
COVID-19-related challenges for districts included developing safe meal distribution systems, boosting low participation, covering COVID-19-related costs and staying informed of policy changes. Barriers for families included transportation difficulties, safety concerns and a lack of fresh foods. Innovative strategies to address obstacles included pandemic-electronic benefits transfer (EBT), bus-stop delivery, community pick-up locations, batched meals and leveraging partner resources.
Conclusions:
A focus on fresher, more appealing meals and greater communication between school officials and parents could boost participation. Districts that leveraged external partnerships were better equipped to provide meals during pandemic conditions. In addition, policies increasing access to fresh foods and capitalising on United States Department of Agriculture waivers could boost school meal participation. Finally, partnering with community organisations and acting upon parent feedback could improve school meal systems, and in combination with pandemic-EBT, address childhood food insecurity.
The coronavirus disease 2019 (COVID-19) pandemic has resulted in shortages of personal protective equipment (PPE), underscoring the urgent need for simple, efficient, and inexpensive methods to decontaminate masks and respirators exposed to severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). We hypothesized that methylene blue (MB) photochemical treatment, which has various clinical applications, could decontaminate PPE contaminated with coronavirus.
Design:
The 2 arms of the study included (1) PPE inoculation with coronaviruses followed by MB with light (MBL) decontamination treatment and (2) PPE treatment with MBL for 5 cycles of decontamination to determine maintenance of PPE performance.
Methods:
MBL treatment was used to inactivate coronaviruses on 3 N95 filtering facepiece respirator (FFR) models and 2 medical mask models. We inoculated FFR and medical mask materials with 3 coronaviruses, including SARS-CoV-2, treated them with 10 µM MB, and exposed them to 50,000 lux of white light or 12,500 lux of red light for 30 minutes. In parallel, integrity was assessed after 5 cycles of decontamination using multiple US and international test methods, and the process was compared with the FDA-authorized vaporized hydrogen peroxide plus ozone (VHP+O3) decontamination method.
Results:
Overall, MBL robustly and consistently inactivated all 3 coronaviruses with 99.8% to >99.9% virus inactivation across all FFRs and medical masks tested. FFR and medical mask integrity was maintained after 5 cycles of MBL treatment, whereas 1 FFR model failed after 5 cycles of VHP+O3.
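For context, percent inactivation converts to log10 reduction as −log10(1 − fraction inactivated); a quick illustrative check:

```python
# 99.8% inactivation is roughly a 2.7 log10 reduction; 99.9% is a 3.0 log10 reduction.
import math
for pct in (99.8, 99.9):
    print(pct, round(-math.log10(1 - pct / 100), 1))
```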
Conclusions:
MBL treatment decontaminated respirators and masks by inactivating 3 tested coronaviruses without compromising integrity through 5 cycles of decontamination. MBL decontamination is effective, is low cost, and does not require specialized equipment, making it applicable in low- to high-resource settings.
There is a high rate of psychiatric comorbidity in patients with epilepsy. However, the impact of surgical treatment of refractory epilepsy on psychopathology remains under investigation. We aimed to examine the impact of epilepsy surgery on psychopathology and quality of life at 1-year post-surgery in a population of patients with epilepsy refractory to medication.
Methods:
This study initially assessed 48 patients with refractory epilepsy using the Structured Clinical Interview for DSM-IV Axis I Disorders (SCID-I), the Hospital Anxiety and Depression Scale (HADS) and the Quality of Life in Epilepsy Inventory 89 (QOLIE-89) on admission to an Epilepsy Monitoring Unit (EMU) as part of their pre-surgical assessment. These patients were again assessed using the SCID-I, QOLIE-89 and HADS at 1-year follow-up post-surgery.
Results:
There was a significant reduction in psychopathology, particularly psychosis, following surgery at 1-year follow-up (p < 0.021). There were no new cases of de novo psychosis and surgery was also associated with a significant improvement in the quality of life scores (p < 0.001).
Conclusions:
This study demonstrates the impact of epilepsy surgery on psychopathology and quality of life in a patient population with refractory epilepsy. The presence of a psychiatric illness should not be a barrier to accessing surgical treatment.
How neighbourhood characteristics affect the physical safety of people with mental illness is unclear.
Aims
To examine neighbourhood effects on physical victimisation towards people using mental health services.
Method
We developed and evaluated a machine-learning-derived, free-text-based natural language processing (NLP) algorithm to ascertain clinical text referring to physical victimisation. This was applied to records on all patients attending National Health Service mental health services in Southeast London. Sociodemographic and clinical data, and diagnostic information on use of acute hospital care (from Hospital Episode Statistics, linked to Clinical Record Interactive Search), were collected for this group, defined as ‘cases’, and for concurrently sampled controls. Multilevel logistic regression models estimated associations (odds ratios, ORs) between physical victimisation and neighbourhood-level fragmentation, crime, income deprivation, and population density.
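The validation against the human-rated gold standard amounts to the calculation below (Python/scikit-learn); variable names are illustrative, and the multilevel regression itself is not reproduced here.

```python
# Positive predictive value and sensitivity of the NLP label against the gold standard.
from sklearn.metrics import confusion_matrix

tn, fp, fn, tp = confusion_matrix(gold_labels, nlp_labels).ravel()
ppv = tp / (tp + fp)            # reported as 0.92
sensitivity = tp / (tp + fn)    # reported as 0.98
```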
Results
Based on a human-rated gold standard, the NLP algorithm had a positive predictive value of 0.92 and sensitivity of 0.98 for (clinically recorded) physical victimisation. A 1 s.d. increase in neighbourhood crime was accompanied by a 7% increase in the odds of physical victimisation in women and a 13% increase in men (adjusted OR (aOR) for women: 1.07, 95% CI 1.01–1.14; aOR for men: 1.13, 95% CI 1.06–1.21; P for gender interaction = 0.218). Although small, the adjusted association for neighbourhood fragmentation appeared greater in magnitude for women (aOR = 1.05, 95% CI 1.01–1.11) than for men, in whom this association was not statistically significant (aOR = 1.00, 95% CI 0.95–1.04; P for gender interaction = 0.096). Neighbourhood income deprivation was associated with victimisation in men and women with similar magnitudes of association.
Conclusions
Neighbourhood factors influencing safety, as well as individual characteristics including gender, may be relevant to understanding pathways to physical victimisation towards people with mental illness.
Impairments in social cognition contribute significantly to disability in schizophrenia patients (SzP). Perception of facial expressions is critical for social cognition. Intact perception requires an individual to visually scan a complex dynamic social scene for transiently moving facial expressions that may be relevant for understanding the scene. The relationship between visual scanning for these facial expressions and social cognition remains unknown.
Methods
In 39 SzP and 27 healthy controls (HC), we used eye-tracking to examine the relationship between performance on The Awareness of Social Inference Test (TASIT), which tests social cognition using naturalistic video clips of social situations, and visual scanning, measuring each individual's visual scanning relative to the mean of the HC group. We then examined the relationship of visual scanning to the specific visual features (motion, contrast, luminance, faces) within the video clips.
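One plausible way to quantify scanning relative to the HC mean (an assumption about the metric, not the authors' code) is to correlate each participant's fixation-density maps with the HC average, frame by frame:

```python
# Mean frame-wise correlation between a subject's gaze-density maps and the HC mean maps.
import numpy as np

def scan_similarity(subject_maps, hc_mean_maps):
    """Both arguments: arrays of shape (n_frames, height, width)."""
    sims = [np.corrcoef(s.ravel(), m.ravel())[0, 1]
            for s, m in zip(subject_maps, hc_mean_maps)]
    return float(np.mean(sims))     # lower values indicate more atypical scanning
```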
Results
TASIT performance was significantly impaired in SzP for trials involving sarcasm (p < 10⁻⁵). Visual scanning was significantly more variable in SzP than HC (p < 10⁻⁶), and predicted TASIT performance in HC (p = 0.02) but not SzP (p = 0.91), differing significantly between groups (p = 0.04). During visual scanning, SzP were less likely to be viewing faces (p = 0.0001) and less likely to saccade to facial motion in peripheral vision (p = 0.008).
Conclusions
SzP show highly significant deficits in the use of visual scanning of naturalistic social scenes to inform social cognition. Alterations in visual scanning patterns may originate from impaired processing of facial motion within peripheral vision. Overall, these results highlight the utility of naturalistic stimuli in the study of social cognition deficits in schizophrenia.
The symptoms of bipolar disorder are sometimes misrecognised as unipolar depression and inappropriately treated with antidepressants. This may be associated with an increased risk of developing mania. However, the extent to which this depends on the type of antidepressant prescribed remains unclear.
Aims
To investigate the association between different classes of antidepressants and subsequent onset of mania/bipolar disorder in a real-world clinical setting.
Methods
Data on prior antidepressant therapy were extracted from 21,012 adults with unipolar depression receiving care from the South London and Maudsley NHS Foundation Trust (SLaM). Multivariable Cox regression analysis (with age and gender as covariates) was used to investigate the association of antidepressant therapy with the risk of developing mania/bipolar disorder.
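A hedged sketch of the survival model follows (Python/lifelines); column names and covariate coding are assumptions.

```python
# Cox model: time to mania/bipolar diagnosis, with antidepressant class indicators.
from lifelines import CoxPHFitter

cols = ["years_to_event_or_censor", "mania_event", "age", "male",
        "ssri", "mirtazapine", "venlafaxine", "tca"]
cph = CoxPHFitter()
cph.fit(cohort[cols], duration_col="years_to_event_or_censor", event_col="mania_event")
cph.print_summary()    # exp(coef) column gives the hazard ratios (cf. 1.34 for SSRIs)
```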
Results
In total, 91,110 person-years of follow-up data were analysed (mean follow-up: 4.3 years). The overall incidence rate of mania/bipolar disorder was 10.9 per 1000 person-years. The peak incidence of mania/bipolar disorder was seen in patients aged between 26 and 35 years (12.3 per 1000 person-years). The most frequently prescribed antidepressants were SSRIs (35.5%), mirtazapine (9.4%), venlafaxine (5.6%) and TCAs (4.7%). Prior antidepressant treatment was associated with an increased incidence of mania/bipolar disorder ranging from 13.1 to 19.1 per 1000 person-years. Multivariable analysis indicated a significant association with SSRIs (hazard ratio 1.34, 95% CI 1.18–1.52) and venlafaxine (1.35, 1.07–1.70).
Conclusions
In people with unipolar depression, antidepressant treatment is associated with an increased risk of subsequent mania/bipolar disorder. These findings highlight the importance of considering risk factors for mania when treating people with depression.
Disclosure of interest
The authors have not supplied their declaration of competing interest.
Mood instability is an important problem but has received relatively little research attention. Natural language processing (NLP) is a novel method that can be used to automatically extract clinical data from electronic health records (EHRs).
Aims
To extract mood instability data from EHRs and investigate its impact on people with mental health disorders.
Methods
Data on mood instability were extracted using NLP from 27,704 adults receiving care from the South London and Maudsley NHS Foundation Trust (SLaM) for affective, personality or psychotic disorders. These data were used to investigate the association of mood instability with different mental disorders and with hospitalisation and treatment outcomes.
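The hospitalisation outcomes described above are typically modelled with a count regression whose exponentiated coefficients are incidence rate ratios; the sketch below (Python/statsmodels) is an assumption about the exact model, with illustrative variable names.

```python
# Negative binomial regression: admission counts ~ mood instability + covariates.
import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf

nb = smf.glm("n_admissions ~ mood_instability + age + C(sex) + C(diagnosis_group)",
             data=cohort, family=sm.families.NegativeBinomial()).fit()
irr = np.exp(nb.params["mood_instability"])   # interpret as an incidence rate ratio
```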
Results
Mood instability was documented in 12.1% of people included in the study. It was most frequently documented in people with bipolar disorder (22.6%), but was also common in personality disorder (17.8%) and schizophrenia (15.5%). It was associated with a greater number of days spent in hospital (B coefficient 18.5, 95% CI 12.1–24.8), greater frequency of hospitalisation (incidence rate ratio 1.95, 1.75–2.17), and an increased likelihood of prescription of antipsychotics (2.03, 1.75–2.35).
Conclusions
Using NLP, it was possible to identify mood instability in a large number of people, which would otherwise not have been possible by manually reading clinical records. Mood instability occurs in a wide range of mental disorders. It is generally associated with poor clinical outcomes. These findings suggest that clinicians should screen for mood instability across all common mental health disorders. The data also highlight the utility of NLP for clinical research.
Disclosure of interest
The authors have not supplied their declaration of competing interest.
There are often substantial delays before diagnosis and initiation of treatment in people with bipolar disorder. Increased delays are a source of considerable morbidity among affected individuals.
Aims
To investigate the factors associated with delays to diagnosis and treatment in people with bipolar disorder.
Methods
Retrospective cohort study using electronic health record data from the South London and Maudsley NHS Foundation Trust (SLaM) on 1364 adults diagnosed with bipolar disorder. The following predictor variables were analysed in a multivariable Cox regression analysis of diagnostic delay and treatment delay from first presentation to SLaM: age, gender, ethnicity, compulsory admission to hospital under the UK Mental Health Act, marital status and other diagnoses prior to bipolar disorder.
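A sketch of the delay analysis follows (Python/lifelines, assumed column names). Note the framing: the 'event' is reaching a diagnosis, so a hazard ratio above 1 corresponds to a shorter delay and a ratio below 1 to a longer one.

```python
# Cox model for time from first presentation to bipolar diagnosis.
from lifelines import CoxPHFitter

cols = ["days_to_diagnosis", "diagnosed", "age", "male", "compulsory_admission",
        "prior_alcohol_misuse", "prior_substance_misuse"]
cph = CoxPHFitter()
cph.fit(bd_cohort[cols], duration_col="days_to_diagnosis", event_col="diagnosed")
cph.print_summary()    # HR > 1 = faster diagnosis; HR < 1 = longer diagnostic delay
```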
Results
The median diagnostic delay was 62 days (interquartile range: 17–243) and the median treatment delay was 31 days (4–122). Compulsory hospital admission was associated with a significant reduction in both diagnostic delay (hazard ratio 2.58, 95% CI 2.18–3.06) and treatment delay (4.40, 3.63–5.62). Prior diagnoses of other psychiatric disorders were associated with increased diagnostic delay, particularly alcohol (0.48, 0.33–0.41) and substance misuse disorders (0.44, 0.31–0.61). Prior diagnoses of schizophrenia and psychotic depression were associated with reduced treatment delay.
Conclusions
Some individuals experience a significant delay in diagnosis and treatment of bipolar disorder, particularly those with alcohol/substance misuse disorders. These findings highlight a need to better identify the symptoms of bipolar disorder and offer appropriate treatment sooner in order to facilitate improved clinical outcomes. This may include the development of specialist early intervention services.
Disclosure of interest
The authors have not supplied their declaration of competing interest.