Galaxy Zoo is an online project in which morphological features in extragalactic imaging surveys are classified through public voting. In this paper, we compare the classifications made for two different surveys, the Dark Energy Spectroscopic Instrument (DESI) imaging survey and part of the Kilo-Degree Survey (KiDS), in the equatorial fields of the Galaxy And Mass Assembly (GAMA) survey. Our aim is to cross-validate and compare the classifications based on different imaging quality and depth. We find that the voting generally agrees globally but with substantial scatter, that is, substantial differences for individual galaxies. There is a notably higher voting fraction in favour of ‘smooth’ galaxies in the DESI+zoobot classifications, most likely due to the difference in imaging depth: DESI imaging is shallower and of slightly lower resolution than KiDS, so its Galaxy Zoo images do not reveal details such as disc features, which are therefore missed in the zoobot training sample. We check against expert visual classifications and find good agreement with the KiDS-based Galaxy Zoo voting. We reproduce the results from Porter-Temple et al. (2022) on the dependence of stellar mass, star-formation rate, and specific star-formation rate on the number of spiral arms. This shows that, once corrected for redshift, the DESI Galaxy Zoo and KiDS Galaxy Zoo classifications agree well on population properties. The zoobot cross-validation increases confidence in its ability to complement Galaxy Zoo classifications and in its capacity for transfer learning across surveys.
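A minimal sketch of the kind of catalogue-level comparison described above, assuming two matched vote-fraction tables; the file and column names are illustrative, not those of the released catalogues:

import numpy as np
import pandas as pd

# Hypothetical per-galaxy 'smooth' vote fractions from the two campaigns.
desi = pd.read_csv("gz_desi_zoobot.csv")   # columns: galaxy_id, smooth_frac
kids = pd.read_csv("gz_kids.csv")          # columns: galaxy_id, smooth_frac

# Match the catalogues on a shared identifier.
merged = desi.merge(kids, on="galaxy_id", suffixes=("_desi", "_kids"))

# Global agreement: correlation of the 'smooth' vote fractions.
r = np.corrcoef(merged["smooth_frac_desi"], merged["smooth_frac_kids"])[0, 1]

# Per-galaxy scatter and any systematic offset (DESI minus KiDS).
diff = merged["smooth_frac_desi"] - merged["smooth_frac_kids"]
print(f"r = {r:.2f}, median offset = {diff.median():+.2f}, "
      f"MAD scatter = {(diff - diff.median()).abs().median():.2f}")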
The prevalence and consequent physical health burden of alcohol use disorder are increasing in almost all age groups in the UK, and nearly 1 in 5 of the population drink at hazardous levels. Those who drink heavily often have limited or patchy engagement with physical health services, and improving this should be a focus of drug and alcohol services.
Our aim was to audit the proportion of clients attending the alcohol service at Lorraine Hewitt House (LHH) who had completed a fibroscan, the outcomes of those fibroscans, and the outcomes of any onward referrals made.
Methods
Since the liver clinic started, more than 100 fibroscans have been completed. These are typically offered to clients in the alcohol pathway; where it can be facilitated, they are done on the day, and otherwise they are booked as scheduled appointments.
We audited the results of the scans for liver stiffness and liver steatosis. Additionally, for those who had abnormal results requiring onward referral, we audited the outcomes of these referrals.
Outcomes and overall physical health were subsequently discussed with each patient who underwent a liver scan.
Results
A total of 100 fibroscans were audited. This represents approximately one third of the clients with alcohol as their primary problem substance at LHH.
Every client had the results of their scan explained and discussed with them and was given lifestyle advice and interventions.
A total of 37 scans (37%) had all parameters within the normal range. 16 (16%) showed increased liver stiffness of >10 kPa, and 15 people (15%) gave consent and were subsequently referred to our local liver clinic. 13% had stage 1 steatosis (238–260 dB/m), 18% had stage 2 steatosis (260–290 dB/m) and 29% had stage 3 steatosis (>290 dB/m).
Of the 15 referrals made to the liver clinic, 10 (66.7%) attended their follow-up appointments; 9 of these clients are awaiting further interventions and the remaining client has been discharged back to their GP.
Of the remaining 5 referrals, 2 are awaiting appointment dates and 3 are pending triage.
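As an illustrative sketch, the thresholds reported above can be applied programmatically; the function names, the referral rule, and the handling of boundary values are assumptions rather than part of the audit protocol:

def steatosis_stage(cap_db_per_m: float) -> int:
    """Map a CAP value (dB/m) to a steatosis stage using the cut-offs above."""
    if cap_db_per_m > 290:
        return 3
    if cap_db_per_m >= 260:
        return 2
    if cap_db_per_m >= 238:
        return 1
    return 0  # within normal range

def needs_liver_referral(stiffness_kpa: float) -> bool:
    """Flag increased liver stiffness (>10 kPa) for onward referral."""
    return stiffness_kpa > 10

print(steatosis_stage(275), needs_liver_referral(12.4))  # -> 2 True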
Conclusion
The physical health of clients attending drug and addiction services is often complicated and in need of specific and targeted interventions. Liver health is particularly relevant to the alcohol client group and integrating fibroscans into drug and addiction services facilitates better engagement and early assessment and intervention.
Hard-to-treat childhood cancers are those where standard treatment options do not exist and the prognosis is poor. Healthcare professionals (HCPs) are responsible for communicating with families about prognosis and complex experimental treatments. We aimed to identify HCPs’ key challenges and skills required when communicating with families about hard-to-treat cancers and their perceptions of communication-related training.
Methods
We interviewed Australian HCPs who had direct responsibilities in managing children/adolescents with hard-to-treat cancer within the past 24 months. Interviews were analyzed using qualitative content analysis.
Results
We interviewed 10 oncologists, 7 nurses, and 3 social workers. HCPs identified several challenges in communicating with families, including: balancing information provision with maintaining realistic hope; managing their own uncertainty; and the underutilization of nurses and social workers during conversations with families, despite widespread preferences for multidisciplinary teamwork. HCPs perceived that making themselves available to families, empowering them to ask questions, and repeating information helped to establish and maintain trusting relationships with families. Half the HCPs reported receiving no formal training for communicating prognosis and treatment options with families of children with hard-to-treat cancers. Nurses, social workers, and less experienced oncologists supported the development of communication training resources more strongly than did more experienced oncologists.
Significance of results
Resources are needed that support HCPs in communicating with families of children with hard-to-treat cancers. Such resources may be particularly beneficial for junior oncologists and other HCPs during their training, and should aim to prepare them for common challenges and to foster greater multidisciplinary collaboration.
To determine the association between blood markers of white matter injury (serum neurofilament light and phosphorylated neurofilament heavy) and a novel neuroimaging measure of microstructural white matter change (diffusion kurtosis imaging) in a heterogeneous sample of Veterans and non-Veterans with a history of remote TBI (i.e., >6 months). The regions examined (anterior thalamic radiation and uncinate fasciculus) are known to be impacted in traumatic brain injury (TBI) and are associated with symptoms common in chronic TBI (e.g., sleep disruption, cognitive and emotional disinhibition).
Participants and Methods:
Participants with complete imaging and blood data (N=24) were sampled from a larger multisite study of chronic mild-moderate TBI. Participants ranged in age from young to middle-aged (mean age = 34.17, SD = 10.96, range = 19-58) and were primarily male (66.7%). The number of distinct TBIs ranged from 1-5 and the time since the most recent TBI ranged from 0-30 years. Scores on a cognitive screener (MoCA) ranged from 22-30 (mean = 26.75). We computed bivariate correlations between mean kurtosis (MK) in the anterior thalamic radiation (ATR; left, right) and uncinate fasciculus (UF; left, right) and the serum markers neurofilament light (NFL) and phosphorylated neurofilament heavy (pNFH); both markers were log-transformed for non-normality. The significance threshold was set at p < 0.05.
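An illustrative sketch of the correlation analysis described above; the data file and column names are assumptions:

import numpy as np
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("chronic_tbi_sample.csv")   # hypothetical study dataframe
markers = ["nfl", "pnfh"]                    # serum marker columns
tracts = ["mk_atr_left", "mk_atr_right", "mk_uf_left", "mk_uf_right"]

for marker in markers:
    logged = np.log(df[marker])              # log-transform for non-normality
    for tract in tracts:
        r, p = pearsonr(logged, df[tract])
        print(f"{marker} vs {tract}: r = {r:.2f}, p = {p:.3f}")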
Results:
pNFH was significantly negatively correlated with MK in the right (r=-0.446) and left (r=-0.599) UF and the right (r=-0.531) and left (r=-0.469) ATR. NFL showed moderate associations with MK in the right (r=-0.345) and left (r=-0.361) UF and negligible-to-small associations in the right (r=-0.063) and left (r=-0.215) ATR. In post-hoc analyses, MK in both the left (r=0.434) and right (r=0.514) UF was positively associated with performance on a frontally mediated list-learning task (California Verbal Learning Test, 2nd Edition; Trials 1-5 total).
Conclusions:
Results suggest that, in a chronic mild-moderate TBI sample, serum pNFH may be a more sensitive blood marker than NFL of microstructural complexity in white matter regions frequently impacted by TBI. Further, they suggest that even years after a mild-moderate TBI, pNFH levels may be informative regarding white matter integrity in regions related to executive functioning and emotional disinhibition, both of which are common presenting problems when these patients are seen in a clinical setting.
Close-range sensors are employed to observe glaciological processes that operate over short timescales (e.g. iceberg calving, glacial lake outburst floods, diurnal surface melting). However, optical instruments fail under poor weather conditions, while radar systems operating below 17 GHz do not have sufficient angular resolution to map glacier surfaces in detail. This letter reviews the potential of millimetre-wave radar at 94 GHz to obtain high-resolution 3-D measurements of glaciers under most weather conditions. We discuss the theory of 94 GHz radar for glaciological studies, demonstrate its potential to map a glacier calving front and summarise future research priorities.
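For context, the angular-resolution advantage follows from the standard diffraction relation for a circular antenna of diameter D: the half-power beamwidth is roughly θ ≈ 1.22 λ/D. Moving from 17 GHz (λ ≈ 17.6 mm) to 94 GHz (λ ≈ 3.2 mm) therefore narrows the beam by a factor of about 17.6/3.2 ≈ 5.5 for the same aperture; this is a textbook relation quoted here for illustration, not a figure taken from the letter.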
Autism and autistic traits are risk factors for suicidal behaviour.
Aims
To explore the prevalence of autism (diagnosed and undiagnosed) in those who died by suicide, and identify risk factors for suicide in this group.
Method
Stage 1: 372 coroners’ inquest records, covering the period 1 January 2014 to 31 December 2017 from two regions of England, were analysed for evidence that the person who died had diagnosed autism or undiagnosed possible autism (elevated autistic traits), and for risk markers. Stage 2: 29 follow-up interviews with the next of kin of those who died gathered further evidence of autism and autistic traits using validated autism screening and diagnostic tools.
Results
Stage 1: evidence of autism (10.8%) was significantly higher in those who died by suicide than the 1.1% prevalence expected in the UK general alive population (odds ratio (OR) = 11.08, 95% CI 3.92–31.31). Stage 2: 5 (17.2%) of the follow-up sample had evidence of autism identified from the coroners’ records in stage 1. We identified evidence of undiagnosed possible autism in an additional 7 (24.1%) individuals, giving a total of 12 (41.4%); significantly higher than expected in the general alive population (1.1%) (OR = 19.76, 95% CI 2.36–165.84). Characteristics of those who died were largely similar regardless of evidence of autism, with groups experiencing a comparably high number of multiple risk markers before they died.
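As a rough check on the stage 1 result using the rounded prevalences: the unadjusted odds ratio is (0.108/0.892) / (0.011/0.989) ≈ 0.121/0.0111 ≈ 10.9, in line with the reported OR of 11.08 (the small difference presumably reflects rounding of the underlying counts).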
Conclusions
Elevated autistic traits are significantly over-represented in those who die by suicide.
Cognitive–behavioural therapy (CBT) is recommended for all patients with psychosis, but is offered to only a minority. This is attributable, in part, to the resource-intensive nature of CBT for psychosis. Responses have included the development of CBT for psychosis in brief and targeted formats, and its delivery by briefly trained therapists. This study explored a combination of these responses by investigating a brief, CBT-informed intervention targeted at distressing voices (the GiVE intervention) administered by a briefly trained workforce of assistant psychologists.
Aims
To explore the feasibility of conducting a randomised controlled trial to evaluate the clinical and cost-effectiveness of the GiVE intervention when delivered by assistant psychologists to patients with psychosis.
Method
This was a three-arm, feasibility, randomised controlled trial comparing the GiVE intervention, a supportive counselling intervention and treatment as usual, recruiting across two sites, with 1:1:1 allocation and blind post-treatment and follow-up assessments.
Results
Feasibility outcomes were favourable with regard to the recruitment and retention of participants and the adherence of assistant psychologists to therapy and supervision protocols. For the candidate primary outcomes, estimated effects were in favour of GiVE compared with supportive counselling and treatment as usual at post-treatment. At follow-up, estimated effects were in favour of supportive counselling compared with GiVE and treatment as usual, and GiVE compared with treatment as usual.
Conclusions
A definitive trial of the GiVE intervention, delivered by assistant psychologists, is feasible. Adaptations to the GiVE intervention and the design of any future trials may be necessary.
Retrospective self-report is typically used for diagnosing previous pediatric traumatic brain injury (TBI). We used a new semi-structured interview instrument (the New Mexico Assessment of Pediatric TBI; NewMAP TBI) to investigate test–retest reliability for TBI characteristics, both for the TBI that qualified for study inclusion and for lifetime history of TBI.
Method:
One hundred and eighty-four children and adolescents with mild TBI (mTBI; aged 8–18), 156 matched healthy controls (HC), and their parents completed the NewMAP TBI within 11 days (subacute; SA) and at 4 months (early chronic; EC) post-injury, with a subset returning at 1 year (late chronic; LC).
Results:
The test–retest reliability of common TBI characteristics [loss of consciousness (LOC), post-traumatic amnesia (PTA), retrograde amnesia, confusion/disorientation] and post-concussion symptoms (PCS) was examined across study visits. Aside from PTA, binary reporting (present/absent) for all TBI characteristics exhibited acceptable (≥0.60) test–retest reliability for both Qualifying and Remote TBIs across all three visits. In contrast, reliability for continuous data (exact duration) was generally unacceptable, with LOC and PCS meeting acceptable criteria at only half of the assessments. Transforming continuous self-report ratings into discrete categories based on injury severity resulted in acceptable reliability. Reliability was not strongly affected by the parent completing the NewMAP TBI.
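One way to quantify the binary (present/absent) agreement described above is Cohen's kappa, checked against the ≥0.60 acceptability criterion; the choice of statistic, file layout and column names here are assumptions rather than necessarily the study's method:

import pandas as pd
from sklearn.metrics import cohen_kappa_score

visits = pd.read_csv("newmap_tbi_visits.csv")   # hypothetical wide-format file

for feature in ["loc", "pta", "retrograde_amnesia", "confusion"]:
    kappa = cohen_kappa_score(visits[f"{feature}_subacute"],
                              visits[f"{feature}_early_chronic"])
    verdict = "acceptable" if kappa >= 0.60 else "unacceptable"
    print(f"{feature}: kappa = {kappa:.2f} ({verdict})")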
Conclusions:
Categorical reporting of TBI characteristics in children and adolescents can aid clinicians in retrospectively obtaining reliable estimates of TBI severity up to a year post-injury. However, test–retest reliability is strongly impacted by the initial data distribution, selected statistical methods, and potentially by patient difficulty in distinguishing among conceptually similar medical concepts (i.e., PTA vs. confusion).
This study aimed to examine the predictors of cognitive performance in patients with pediatric mild traumatic brain injury (pmTBI) and to determine whether group differences in cognitive performance on a computerized test battery could be observed between pmTBI patients and healthy controls (HC) in the sub-acute (SA) and the early chronic (EC) phases of injury.
Method:
203 pmTBI patients recruited from emergency settings and 159 age- and sex-matched HC aged 8–18 rated their ongoing post-concussive symptoms (PCS) on the Post-Concussion Symptom Inventory and completed the Cogstate brief battery in the SA (1–11 days) phase of injury. A subset (156 pmTBI patients; 144 HC) completed testing in the EC (~4 months) phase.
Results:
Within the SA phase, a group difference was observed only for the visual learning task (One-Card Learning), with pmTBI patients being less accurate than HC. Follow-up analyses indicated that higher ongoing PCS and higher 5P clinical risk scores were significant predictors of lower One-Card Learning accuracy within the SA phase, whereas premorbid variables (estimates of intellectual functioning, parental education, and presence of learning disabilities or attention-deficit/hyperactivity disorder) were not.
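An illustrative regression in the spirit of the follow-up analysis above, predicting One-Card Learning accuracy from ongoing PCS, the 5P risk score and premorbid covariates; variable names and the exact model specification are assumptions:

import pandas as pd
import statsmodels.formula.api as smf

sa = pd.read_csv("pmtbi_subacute.csv")   # hypothetical subacute-phase data
model = smf.ols(
    "ocl_accuracy ~ pcs_total + risk_5p + premorbid_iq + parental_education",
    data=sa,
).fit()
print(model.rsquared)    # proportion of variance explained by the full model
print(model.summary())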
Conclusions:
The absence of group differences at EC phase is supportive of cognitive recovery by 4 months post-injury. While the severity of ongoing PCS and the 5P score were better overall predictors of cognitive performance on the Cogstate at SA relative to premorbid variables, the full regression model explained only 4.1% of the variance, highlighting the need for future work on predictors of cognitive outcomes.
The great conducting teacher Hans Swarowsky told his students at Vienna’s Academy of Music and Performing Arts that a conductor has only three jobs: start the piece, make any changes within it, and finish it. Indeed, certain elements of timekeeping would seem to render that aspect of conducting rather simple.
The criteria for objective memory impairment in mild cognitive impairment (MCI) are vaguely defined. Aggregating the number of abnormal memory scores (NAMS) is one way to operationalise memory impairment, which we hypothesised would predict progression to Alzheimer’s disease (AD) dementia.
Methods:
As part of the Australian Imaging, Biomarkers and Lifestyle Flagship Study of Ageing, 896 older adults who did not have dementia were administered a psychometric battery including three neuropsychological tests of memory, yielding 10 indices of memory. We calculated the number of memory scores corresponding to z ≤ −1.5 (i.e., NAMS) for each participant. Incident diagnosis of AD dementia was established by consensus of an expert panel after 3 years.
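A minimal sketch of the NAMS construction described above, counting memory indices at or below z = −1.5 for each participant; the file and column names are assumptions:

import pandas as pd

scores = pd.read_csv("aibl_memory_z_scores.csv")           # hypothetical z-scores
memory_cols = [f"memory_index_{i}" for i in range(1, 11)]  # the 10 memory indices

scores["nams"] = (scores[memory_cols] <= -1.5).sum(axis=1)  # 0-10 abnormal scores
print(scores["nams"].value_counts().sort_index())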
Results:
Of the 722 (80.6%) participants who were followed up, 54 (7.5%) developed AD dementia. There was a strong correlation between NAMS and probability of developing AD dementia (r = .91, p = .0003). Each abnormal memory score conferred an additional 9.8% risk of progressing to AD dementia. The area under the receiver operating characteristic curve for NAMS was 0.87 [95% confidence interval (CI) .81–.93, p < .01]. The odds ratio for NAMS was 1.67 (95% CI 1.40–2.01, p < .01) after correcting for age, sex, education, estimated intelligence quotient, subjective memory complaint, Mini-Mental State Exam (MMSE) score and apolipoprotein E ϵ4 status.
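A hedged follow-up sketch of the discrimination and adjusted-odds analyses reported above; covariate and column names are assumptions, and the exact modelling choices may differ from the study's:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.metrics import roc_auc_score

df = pd.read_csv("aibl_followup.csv")   # hypothetical file with 3-year outcomes

# Discrimination of NAMS for incident AD dementia.
print("AUC:", roc_auc_score(df["ad_dementia"], df["nams"]))

# Adjusted logistic model; exp(coefficient) is the odds ratio per abnormal score.
fit = smf.logit(
    "ad_dementia ~ nams + age + sex + education + iq + smc + mmse + apoe4",
    data=df,
).fit()
print("OR per abnormal score:", np.exp(fit.params["nams"]))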
Conclusions:
Aggregation of abnormal memory scores may be a useful way of operationalising objective memory impairment, predicting incident AD dementia and providing prognostic stratification for individuals with MCI.
The U.S. led the world in environmental policy in the 1970s, but now lags behind comparable nations and resists joining others in tackling climate change. Two embedded, entwined, and exceptional American institutions—broad private property rights and competitive federalism—are necessary for explaining this shift. These two institutions shaped the exceptional stringency of 1970s American environmental laws and the powerful backlash against these laws that continues today. American colonies ensured broad private rights to use land and natural resources for profit. The colonies and the independent state governments that followed wielded expansive authority to govern this commodified environment. In the 1780s, Congress underwrote state governance of the privatized environment by directing the parceling and transfer of federal land to private parties and of environmental governance to future states. The 1787 Constitution cemented these relationships and exposed states to interstate economic competition. Environmental laws of the 1970s imposed unprecedented challenges to the environmental prerogatives long protected by these institutions, and the beneficiaries responded with a wide-ranging counterattack. Federalism enabled this opposition to build powerful regional alliances to stymie action on climate change. These overlooked institutional factors are necessary to explain why Canadian and American environmental policies have diverged.
Complex challenges may arise when patients present to emergency services with an advance decision to refuse life-saving treatment following suicidal behaviour.
Aims
To investigate the use of advance decisions to refuse treatment in the context of suicidal behaviour from the perspective of clinicians and people with lived experience of self-harm and/or psychiatric services.
Method
Forty-one participants aged 18 or over from hospital services (emergency departments, liaison psychiatry and ambulance services) and groups of individuals with experience of psychiatric services and/or self-harm were recruited to six focus groups in a multisite study in England. Data were collected in 2016 using a structured topic guide and included a fictional vignette. They were analysed using thematic framework analysis.
Results
Advance decisions to refuse treatment for suicidal behaviour were contentious across groups. Three main themes emerged from the data: (a) they may enhance patient autonomy and aid clarity in acute emergencies, but also create legal and ethical uncertainty over treatment following self-harm; (b) they are anxiety provoking for clinicians; and (c) in practice, there are challenges in validation (for example, validating the patient’s mental capacity at the time of writing), time constraints and significant legal/ethical complexities.
Conclusions
The potential for patients to refuse life-saving treatment following suicidal behaviour in a legal document was challenging and anxiety provoking for participants. Clinicians should act with caution given the potential for recovery and fluctuations in suicidal ideation. Currently, advance decisions to refuse treatment have questionable use in the context of suicidal behaviour given the challenges in validation. Discussion and further patient research are needed in this area.
Declaration of interest
D.G., K.H. and N.K. are members of the Department of Health's (England) National Suicide Prevention Advisory Group. N.K. chaired the National Institute for Health and Care Excellence (NICE) guideline development group for the longer-term management of self-harm and the NICE Topic Expert Group (which developed the quality standards for self-harm services). He is currently chair of the updated NICE guideline for Depression. K.H. and D.G. are NIHR Senior Investigators. K.H. is also supported by the Oxford Health NHS Foundation Trust and N.K. by the Greater Manchester Mental Health NHS Foundation Trust.
Objectives: Long-term neurological response to treatment after a severe traumatic brain injury (sTBI) is a dynamic process. Failure to capture individual heterogeneity in recovery may impact findings from single-endpoint sTBI randomized controlled trials (RCTs). The present study re-examined the efficacy of erythropoietin (Epo) and transfusion thresholds through longitudinal modeling of sTBI recovery as measured by the Disability Rating Scale (DRS). This study complements the report of primary outcomes in the Epo sTBI RCT, which failed to detect significant effects of acute treatment at 6 months post-injury. Methods: We implemented mixed effects models to characterize the recovery time-course and to examine treatment efficacy as a function of time post-injury and injury severity. Results: The inter-quartile range (25th–75th percentile) of DRS scores was 20–28 at week 1; 8–24 at week 4; and 3–17 at 6 months. TBI severity group was found to significantly interact with Epo randomization group on mean DRS recovery curves. No significant differences in DRS recovery were found between transfusion threshold groups. Conclusions: This study demonstrated the value of taking a comprehensive view of recovery from sTBI in the Epo RCT as a temporally dynamic process that is shaped by both treatment and injury severity, and highlights the importance of the timing of primary outcome measurement. Effects of Epo treatment varied as a function of injury severity and time. Future studies are warranted to understand the possible moderating influence of injury severity on treatment effects pertaining to sTBI recovery. (JINS, 2019, 25, 293–301)
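A hedged sketch of the kind of longitudinal mixed-effects model described above, with a treatment-by-severity-by-time structure and participant-level random intercepts and slopes; variable names and the exact formula are assumptions, not the authors' specification:

import pandas as pd
import statsmodels.formula.api as smf

long_df = pd.read_csv("epo_stbi_drs_long.csv")   # hypothetical long-format data
model = smf.mixedlm(
    "drs ~ weeks_post_injury * epo_group * severity_group",
    data=long_df,
    groups=long_df["subject_id"],
    re_formula="~weeks_post_injury",
).fit()
print(model.summary())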
OBJECTIVES/SPECIFIC AIMS: To describe adverse behavioral symptoms attributed to droxidopa therapy for neurogenic orthostatic hypotension (nOH).
METHODS/STUDY POPULATION: BACKGROUND: Droxidopa, a norepinephrine (NE) precursor, improves symptoms of nOH by replenishing NE levels. Central NE effects are poorly described but may offer potential benefits given the pathophysiologic progression of α-synuclein-related disorders. Here we report a series of cognitive and behavioral side effects linked to droxidopa therapy. METHODS: We identified 5 patients treated at Vanderbilt University who developed behavioral symptoms including mania, irritability, and disorientation shortly after the initiation of droxidopa for nOH. Comprehensive chart reviews were performed for all patients, including analysis of droxidopa titration schedule and dosing, medical comorbidities, clinical course, and outcome. All patients had symptoms of synucleinopathy, manifesting with autonomic failure, REM sleep behavior disorder, and parkinsonism. Four met criteria for idiopathic Parkinson's disease (PD), and one was diagnosed with pure autonomic failure but had concomitant symptoms of parkinsonism and REM sleep behavior disorder.
RESULTS/ANTICIPATED RESULTS: Our patients had no significant cognitive or behavioral symptoms before the initiation of droxidopa. The average decrease in blood pressure upon standing was 27 mmHg systolic and 17 mmHg diastolic. Behavioral disturbances were observed early in the titration period and at relatively low doses of droxidopa (total daily doses ranging from 300 to 800 mg/day; droxidopa therapeutic dose range 900–1800 mg/day). The most common symptoms reported were mania, irritability, and confusion. Symptoms resolved with dose reduction in 4 patients, and droxidopa was discontinued in 1 patient due to persistent irritability. No other medical comorbidities or alternative etiologies were identified to explain these effects.
DISCUSSION/SIGNIFICANCE OF IMPACT: Droxidopa is a prodrug designed to act peripherally, but it may also have important, yet poorly described, central effects. We hypothesize that these behavioral manifestations result from an “overdose” of key NE networks linking orbitofrontal and mesolimbic regions. Further studies are warranted to better characterize central NE effects in patients treated with droxidopa.
The correlation between ATP concentration and bacterial burden in the patient care environment was assessed. These findings suggest that a correlation exists between ATP concentration and bacterial burden, and they generally support ATP technology manufacturer-recommended cutoff values. Despite relatively modest discriminative ability, this technology may serve as a useful proxy for cleanliness.