Functional cognitive disorder is an increasingly recognised subtype of functional neurological disorder for which treatment options are currently limited. We have developed a brief online group acceptance and commitment therapy (ACT)-based intervention.
Aims
To assess the feasibility of conducting a randomised controlled trial of this intervention versus treatment as usual (TAU).
Method
The study was a parallel-group, single-blind randomised controlled trial, with participants recruited from cognitive neurology, neuropsychiatry and memory clinics in London. Participants were randomised into two groups: ACT + TAU or TAU alone. Feasibility was assessed on the basis of recruitment and retention rates, the acceptability of the intervention, and signal of efficacy on the primary outcome measure, the Acceptance and Action Questionnaire II (AAQ-II) score, although the study was not powered to demonstrate efficacy statistically. Outcome measures were collected at baseline and at 2, 4 and 6 months post-intervention, including assessments of quality of life, memory, anxiety, depression and healthcare use.
Results
We randomised 44 participants, with a participation rate of 51.1% (95% CI 40.8–61.5%); 36% of referred participants declined involvement. Retention was high: 81.8% of ACT participants attended at least four sessions, and 64.3% reported being ‘satisfied’ or ‘very satisfied’, compared with 0% in the TAU group. Psychological flexibility as measured using the AAQ-II showed a trend towards modest improvement in the ACT group at 6 months. Other measures (quality of life, mood, memory satisfaction) also demonstrated small to modest positive trends.
Conclusions
It has proven feasible to conduct a randomised controlled trial of this brief online group ACT intervention versus TAU for functional cognitive disorder.
Hallucinations are common and distressing symptoms in Parkinson’s disease (PD). Treatment response in clinical trials is measured using validated questionnaires, including the Scale for Assessment of Positive Symptoms-Hallucinations (SAPS-H) and University of Miami PD Hallucinations Questionnaire (UM-PDHQ). The minimum clinically important difference (MCID) has not been determined for either scale. This study aimed to estimate a range of MCIDs for SAPS-H and UM-PDHQ using both consensus-based and statistical approaches.
Methods
A Delphi survey was used to seek the opinions of researchers, clinicians, and people with lived experience. We defined consensus as agreement ≥75%. Statistical approaches used blinded data from the first 100 PD participants in the Trial for Ondansetron as Parkinson’s Hallucinations Treatment (TOP HAT, NCT04167813). The distribution-based approach defined the MCID as 0.5 of the standard deviation of the change in scores from baseline to 12 weeks. The anchor-based approach defined the MCID as the average change in scores corresponding to a 1-point improvement on the clinical global impression-severity scale (CGI-S).
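As a concrete illustration of the two statistical definitions above, the sketch below computes a distribution-based MCID (0.5 of the standard deviation of the change score) and an anchor-based MCID (the mean change among participants whose CGI-S improved by 1 point). The data, variable names and numbers are purely hypothetical and are not drawn from the TOP HAT trial.

```python
# Illustrative sketch of the two statistical MCID definitions described above.
# All values below are hypothetical; they are not TOP HAT trial data.
import numpy as np

rng = np.random.default_rng(0)
n = 100
baseline = rng.normal(8, 3, n)             # hypothetical SAPS-H scores at baseline
week12 = baseline - rng.normal(2, 4, n)    # hypothetical scores at 12 weeks
change = week12 - baseline                 # negative values = improvement
cgi_improvement = rng.integers(0, 3, n)    # hypothetical CGI-S improvement (points)

# Distribution-based MCID: half the standard deviation of the change score
mcid_distribution = 0.5 * np.std(change, ddof=1)

# Anchor-based MCID: mean absolute change among participants who improved
# by exactly 1 point on the CGI-S anchor
anchor_group = change[cgi_improvement == 1]
mcid_anchor = np.abs(anchor_group.mean())

print(f"Distribution-based MCID: {mcid_distribution:.1f} points")
print(f"Anchor-based MCID:       {mcid_anchor:.1f} points")
```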
Results
Fifty-one researchers and clinicians contributed to three rounds of the Delphi survey and reached consensus that the MCID was 2 points on both scales. Sixteen experts with lived experience reached the same consensus. Distribution-defined MCIDs were 2.6 points for SAPS-H and 1.3 points for UM-PDHQ, whereas anchor-based MCIDs were 2.1 and 1.3 points, respectively.
Conclusions
We used triangulation across multiple methodologies to derive a range of MCID estimates for the two rating scales: 2 to 2.7 points for SAPS-H and 1.3 to 2 points for UM-PDHQ.
Journal editors often deal with allegations of research misconduct, defined by the Office of Research Integrity (ORI) in the United States as fabrication, falsification, and plagiarism. It is important that editors have a transparent and consistent process for dealing with these allegations quickly and fairly. This process will include the authors and may include research integrity officers at the sponsoring institution as well as funders. Some retractions cite grounds that fall outside the ORI definition, such as inadequate peer review and unreported conflicts of interest, but these nevertheless represent scientific misconduct.
Motor neuron disease (MND) is a progressive, fatal, neurodegenerative condition that affects motor neurons in the brain and spinal cord, resulting in loss of the ability to move, speak, swallow and breathe. Acceptance and commitment therapy (ACT) is an acceptance-based behavioural therapy that may be particularly beneficial for people living with MND (plwMND). This qualitative study aimed to explore plwMND’s experiences of receiving adapted ACT, tailored to their specific needs, and therapists’ experiences of delivering it.
Method:
Semi-structured qualitative interviews were conducted with plwMND who had received up to eight 1:1 sessions of adapted ACT and therapists who had delivered it within an uncontrolled feasibility study. Interviews explored experiences of ACT and how it could be optimised for plwMND. Interviews were audio recorded, transcribed and analysed using framework analysis.
Results:
Participants were 14 plwMND and 11 therapists. Data were coded into four over-arching themes: (i) an appropriate tool to navigate the disease course; (ii) the value of therapy outweighing the challenges; (iii) relevance to the individual; and (iv) involving others. These themes highlighted that ACT was perceived to be acceptable by plwMND and therapists, and many participants reported or anticipated beneficial outcomes in the future, despite some therapeutic challenges. They also highlighted how individual factors can influence experiences of ACT, and the potential benefit of involving others in therapy.
Conclusions:
Qualitative data supported the acceptability of ACT for plwMND. Future research and clinical practice should address expectations and personal relevance of ACT to optimise its delivery to plwMND.
Key learning aims
(1) To understand the views of people living with motor neuron disease (plwMND) and therapists on acceptance and commitment therapy (ACT) for people living with this condition.
(2) To understand the facilitators of and barriers to ACT for plwMND.
(3) To learn whether ACT that has already been tailored to the specific needs of plwMND requires further adaptation to increase its acceptability to this population.
Disease-modifying therapies (DMTs) for Alzheimer’s disease (AD) are emerging following successful clinical trials of therapies targeting amyloid beta (Aβ) protofibrils or plaques. Determining patient eligibility and monitoring treatment efficacy and adverse events, such as Aβ-related imaging abnormalities, necessitates imaging with MRI and PET. The Canadian Consortium on Neurodegeneration in Aging (CCNA) Imaging Workgroup aimed to synthesize evidence and provide recommendations on implementing imaging protocols for AD DMTs in Canada.
Methods:
The workgroup employed a Delphi process to develop these recommendations. Experts from radiology, neurology, biomedical engineering, nuclear medicine, MRI and medical physics were recruited. Surveys and meetings were conducted to achieve consensus on key issues, including protocol standardization, scanner strength, monitoring protocols based on risk profiles and optimal protocol lengths. Draft recommendations were refined through multiple iterations and expert discussions.
Results:
The recommendations emphasize standardized acquisition protocols across manufacturers and scanner strengths to ensure consistent and reliable clinical treatment decisions; monitoring protocols tailored to each DMT’s safety and efficacy profile; consistent monitoring regardless of perceived treatment efficacy; and MRI screening on 1.5T or 3T scanners with adapted protocols. An optimal protocol length of 20–30 minutes was deemed feasible; specific sequences are suggested.
Conclusion:
The guidelines aim to enhance imaging data quality and consistency, facilitating better clinical decision-making and improving patient outcomes. Further research is needed to refine these protocols and address evolving challenges with new DMTs. It is recognized that the administrative, financial and logistical capacity to deliver additional MRI and positron emission tomography scans requires careful planning.
Depression is common in people with dementia, and negatively affects quality of life.
Aims
This paper aims to evaluate the cost-effectiveness of an intervention for depression in mild and moderate dementia caused by Alzheimer's disease over 12 months (PATHFINDER trial), from both the health and social care and societal perspectives.
Method
A total of 336 participants were randomised to receive the adapted PATH intervention in addition to treatment as usual (TAU) (n = 168) or TAU alone (n = 168). Health and social care resource use was collected with the Client Service Receipt Inventory, and health-related quality-of-life data with the EQ-5D-5L instrument, at baseline and at 3-, 6- and 12-month follow-up points. The principal analysis comprised quality-adjusted life-years (QALYs) calculated from the participant responses to the EQ-5D-5L instrument.
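For readers unfamiliar with how QALYs are derived from repeated quality-of-life measurements, the sketch below applies the standard area-under-the-curve (trapezoidal) approach to utility scores at the four assessment points. The utility values are hypothetical, it is assumed that the EQ-5D-5L responses have already been converted to utilities with an appropriate value set, and none of this is the PATHFINDER dataset.

```python
# Minimal sketch of the QALY calculation: the area under the utility curve
# across baseline and the 3-, 6- and 12-month follow-up points (trapezoidal rule).
# Utility values are hypothetical; EQ-5D-5L responses are assumed to have been
# converted to utilities with a suitable value set beforehand.
import numpy as np

times_years = np.array([0.0, 0.25, 0.5, 1.0])   # baseline, 3, 6, 12 months
utilities = np.array([0.62, 0.66, 0.64, 0.68])  # hypothetical EQ-5D-5L utilities

# Trapezoidal area under the utility curve = QALYs accrued over the year
qalys = np.sum(np.diff(times_years) * (utilities[:-1] + utilities[1:]) / 2)
print(f"QALYs over 12 months: {qalys:.3f}")
```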
Results
The mean cost of the adapted PATH intervention was estimated at £1141 per PATHFINDER participant. From a health and social care perspective, the mean difference in costs between the adapted PATH and control arms at 12 months was −£74 (95% CI −£1942 to £1793), and from the societal perspective it was −£671 (95% CI −£9144 to £7801). The mean difference in QALYs was 0.027 (95% CI −0.004 to 0.059). At the £20 000 per QALY gained threshold, the probability of adapted PATH being cost-effective was 74% from the health and social care perspective and 68% from the societal perspective.
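The probabilities of cost-effectiveness reported above are typically obtained from the joint uncertainty in incremental costs and QALYs via the net monetary benefit. The sketch below illustrates that calculation with simulated draws standing in for bootstrap replicates; the means are taken from the abstract, but the spreads, and the calculation itself, are illustrative rather than a reproduction of the PATHFINDER analysis.

```python
# Illustrative calculation of the probability of cost-effectiveness at a
# willingness-to-pay threshold using net monetary benefit (NMB).
# Simulated draws stand in for bootstrap replicates of the trial data; the
# means match the abstract, but the spreads are made up for illustration.
import numpy as np

rng = np.random.default_rng(1)
threshold = 20_000   # willingness to pay per QALY gained (GBP)
n_draws = 5_000

# Incremental cost and QALYs of adapted PATH + TAU versus TAU alone
inc_cost = rng.normal(-74, 950, n_draws)       # mean from abstract; SD illustrative
inc_qalys = rng.normal(0.027, 0.016, n_draws)  # mean from abstract; SD illustrative

# NMB > 0 means the intervention is cost-effective at this threshold
nmb = threshold * inc_qalys - inc_cost
prob_cost_effective = (nmb > 0).mean()
print(f"Probability cost-effective at £{threshold:,}/QALY: {prob_cost_effective:.0%}")
```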
Conclusions
The addition of the adapted PATH intervention to TAU for people with dementia and depression generated cost savings alongside higher quality of life compared with TAU alone; however, the differences in costs and QALYs were not statistically significant.
Research articles in the clinical and translational science literature commonly use quantitative data to inform evaluation of interventions, learn about the etiology of disease, or develop methods for diagnostic testing or risk prediction of future events. The peer review process must evaluate the methodology used therein, including use of quantitative statistical methods. In this manuscript, we provide guidance for peer reviewers tasked with assessing quantitative methodology, intended to complement guidelines and recommendations that exist for manuscript authors. We describe components of clinical and translational science research manuscripts that require assessment including study design and hypothesis evaluation, sampling and data acquisition, interventions (for studies that include an intervention), measurement of data, statistical analysis methods, presentation of the study results, and interpretation of the study results. For each component, we describe what reviewers should look for and assess; how reviewers should provide helpful comments for fixable errors or omissions; and how reviewers should communicate uncorrectable and irreparable errors. We then discuss the critical concepts of transparency and acceptance/revision guidelines when communicating with responsible journal editors.
Hard cider is a sector of a maturing craft beverage industry that continues to experience growth in the United States. Cider is also experiencing challenges, however, such as competition from other alcohol markets, changing consumer preferences, the supply chain, and inflationary pressures. National policy changes may help promote more optimal outcomes for this sector, but public support is important to policy formation. This study uses survey data from a best-worst scaling experiment of consumers in four leading cider-producing states (Michigan, Washington, Wisconsin, and Vermont) to understand preferences toward ten broad cider policy initiatives. The results of multinomial logistic modeling reveal that consumers prefer policies mandating ingredients, nutrition facts, and allergen labeling across all ciders. The least preferred policy initiatives include allowing producers to use vintage on labeling and funding regional cider development. These results have important implications for stakeholders across the industry, including the benefits of labeling disclosures in marketing and the need to improve public awareness of barriers to cider industry development.
Background: Previously, our hospital manually built a static antibiogram from a surveillance system (VigiLanz) culture report. In 2019, a collaboration between the antimicrobial stewardship team (AST) and the infection control (IC) team set out to leverage data automation to create a dynamic antibiogram. The goals were an antibiogram that could be easily distributed and updated for hospital staff, with the added ability for the AST and IC teams to perform advanced tracking and surveillance of organism and drug susceptibilities. With a readily available, accurate, and Clinical and Laboratory Standards Institute (CLSI)–compliant antibiogram, clinicians have the best available data on which to base their empiric antibiotic decisions. Methods: First, we assessed the required access to hospital databases and selected visualization software (MS Power BI). Connecting SQL database feeds to Power BI enabled creation of a data model, built with DAX and M code, that complies with the CLSI by restricting the data to the first isolate per patient per year. Once a visual antibiogram was created, it was validated against compiled antibiograms using data from the microbiology laboratory middleware (bioMerieux, Observa Integrated Data Management Software). This validation process uncovered some discrepancies between the 2 reference reports due to cascade reporting of susceptibilities. The Observa-derived data were used as the source of truth. The antibiogram prototype was presented to AST/IC members, microbiology laboratory leadership, and other stakeholders to assess functionality. Results: Following feedback and revisions by stakeholders, the new antibiogram was published on a hospital-wide digital platform (Fig. 1). Clinicians may view the antibiogram at any time on desktops from a firewall (or password)–protected intranet. The antibiogram view defaults to the current calendar year, and users may interact with the antibiogram rows and columns without disrupting the integrity of the background databases or code. Each year, simply refreshing the Power BI antibiogram and changing the calendar year allows us to easily and accurately update the antibiogram on the hospital-wide digital platform. Conclusions: This interdisciplinary collaboration resulted in a new dynamic, CLSI-compliant antibiogram with improved usability, increased visibility, and straightforward updating. In the future, a mobile version of the antibiogram may further enhance accessibility, bring more useful information to providers, and optimize AST/IC guidelines and education.
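A central step in the data model described above is restricting the antibiogram to the first isolate per patient per year, in line with CLSI guidance. The abstract implements this inside the Power BI data model with DAX and M code; the pandas sketch below only illustrates the equivalent deduplication and percent-susceptible logic on a made-up isolate table and is not the hospital's implementation.

```python
# Illustrative first-isolate-per-patient-per-year deduplication (the CLSI rule
# mentioned above), followed by a percent-susceptible summary per organism.
# The hospital implemented this in Power BI with DAX/M code; this pandas
# version only demonstrates the logic on a hypothetical isolate table.
import pandas as pd

isolates = pd.DataFrame({
    "patient_id": [101, 101, 101, 202, 202],
    "organism":   ["E. coli", "E. coli", "S. aureus", "E. coli", "E. coli"],
    "collected":  pd.to_datetime(
        ["2023-01-05", "2023-03-10", "2023-02-01", "2023-06-20", "2024-01-15"]),
    "ciprofloxacin": ["S", "R", "S", "R", "S"],   # susceptibility results
})
isolates["year"] = isolates["collected"].dt.year

# Keep only the first isolate of each organism per patient per calendar year
first_isolates = (
    isolates.sort_values("collected")
            .groupby(["patient_id", "organism", "year"])
            .head(1)
)

# Percent susceptible per organism (one antibiogram cell per organism/drug pair)
pct_susceptible = (
    first_isolates.assign(S=first_isolates["ciprofloxacin"].eq("S"))
                  .groupby("organism")["S"].mean()
                  .mul(100)
)
print(first_isolates)
print(pct_susceptible.round(0))
```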
Patient refusal of transportation is common in EMS practice and can have fatal outcomes. How to determine which patients are at high risk for poor outcomes remains poorly defined. This study described patients who experienced an out-of-hospital cardiac arrest (OHCA) within 24 hours of refusing transportation.
Method:
This is a retrospective, descriptive study of patients who had an OHCA within 24 hours of refusing EMS transportation between 2019 and 2021. Data were obtained from a large, urban medical control authority seeing 175,000 EMS calls annually. We reviewed patient demographics, the EMS events at which transportation was refused, and cardiac arrest outcomes.
Results:
There were 6, 30, and 28 EMS refusals resulting in OHCA in 2019, 2020, and 2021, respectively. Patients who had OHCA were on average 65.7 (range 28-103) years old, and most were African American (54/64). Patients had HTN (36/64), diabetes (19/64), COPD (11/64), and CHF (7/64). Common complaints included breathing problems (17/64) and near syncope (8/64); chest pain was uncommon (4/64). One (28/64) or two (13/64) abnormal vital signs were present, and missing vital signs (28/64) were common. Tachycardia (32.8%, 21/64), HTN (29.7%, 19/64), and hypotension (17.2%, 11/64) were more prevalent in the OHCA population than among all refusal patients (tachycardia 0.33% [1,978/598,416], HTN 2.27% [13,601/598,416], and hypotension 0.04% [218/598,416]). Patients were seen by both ALS (29/64) and BLS (35/64) providers. Most providers documented risks, including death (38/64), though few contacted medical control (14/64). Return encounters for OHCA resulted in obvious death (23/64) or field termination (20/64). Few patients achieved ROSC (7/64).
Conclusion:
Patients who had an OHCA within 24 hours of refusing transport had underlying comorbidities and abnormal or missing vital signs. They experienced tachycardia, hypertension, and hypotension at higher rates than the overall refusal population. Few patients achieved ROSC. Further research is needed to determine methods to mitigate poor outcomes and decrease refusals.
Artificial intelligence (AI) systems have demonstrated impressive performance across a variety of clinical tasks. However, notoriously, sometimes these systems are “black boxes.” The initial response in the literature was a demand for “explainable AI.” However, recently, several authors have suggested that making AI more explainable or “interpretable” is likely to be at the cost of the accuracy of these systems and that prioritizing interpretability in medical AI may constitute a “lethal prejudice.” In this paper, we defend the value of interpretability in the context of the use of AI in medicine. Clinicians may prefer interpretable systems over more accurate black boxes, which in turn is sufficient to give designers of AI reason to prefer more interpretable systems in order to ensure that AI is adopted and its benefits realized. Moreover, clinicians may be justified in this preference. Achieving the downstream benefits from AI is critically dependent on how the outputs of these systems are interpreted by physicians and patients. A preference for the use of highly accurate black box AI systems, over less accurate but more interpretable systems, may itself constitute a form of lethal prejudice that may diminish the benefits of AI to—and perhaps even harm—patients.
Depression in dementia is common, disabling and causes significant distress to patients and carers. Despite widespread use of antidepressants for depression in dementia, there is no evidence of therapeutic efficacy, and their use is potentially harmful in this patient group. Depression in dementia has poor outcomes and effective treatments are urgently needed. Understanding why antidepressants are ineffective in depression in dementia could provide insight into their mechanism of action and aid identification of new therapeutic targets. In this review we discuss why depression in dementia may be a distinct entity, current theories of how antidepressants work and how these mechanisms of action may be affected by disease processes in dementia. We also consider why clinicians continue to prescribe antidepressants in dementia, and novel approaches to understand and identify effective treatments for patients living with depression and dementia.
The New York Bight is undergoing rapid anthropogenic change amidst an apparent increase in baleen whale sightings. Though survey efforts have increased in recent years, the lack of published knowledge on baleen whale occurrence prior to these efforts impedes effective assessments of distributional or behavioural shifts due to increasing human activities. Here we synthesize opportunistic sightings of baleen whales from 1998–2017, which represent the majority of sightings data prior to recent survey efforts and which are largely unpublished. Humpback and fin whales were the most commonly sighted species, followed by North Atlantic right whales and North Atlantic minke whales. Important behaviours such as feeding and nursing were observed, and most species (including North Atlantic right whales) were seen during all seasons. Baleen whales overlapped with multiple anthropogenic use areas, and all species, most notably North Atlantic right whales, were sighted outside the spatial and temporal bounds of the Seasonal Management Areas for North Atlantic right whales. These opportunistic data are vital for providing a baseline and context for baleen whales in the New York Bight prior to broad-scale survey efforts, and they facilitate interpretation of current and future observations and trends, which can more accurately inform effective management and mitigation efforts.
Ice streams are warmed by shear strain, both vertical shear near the bed and lateral shear at the margins. Warm ice deforms more easily, establishing a positive feedback loop in an ice stream where fast flow leads to warm ice and then to even faster flow. Here, we use radar attenuation measurements to show that the Siple Coast ice streams are colder than previously thought, which we hypothesize is due to along-flow advection of cold ice from upstream. We interpret the attenuation results within the context of previous ice-temperature measurements from nearby sites where hot-water boreholes were drilled. These in-situ temperatures are notably colder than model predictions, both in the ice streams and in an ice-stream shear margin. We then model ice temperature using a 1.5-dimensional numerical model which includes a parameterization for along-flow advection. Compared to analytical solutions, we find depth-averaged temperatures that are colder by 0.7°C in the Bindschadler Ice Stream, 2.7°C in the Kamb Ice Stream and 6.2–8.2°C in the Dragon Shear Margin of Whillans Ice Stream, closer to the borehole measurements at all locations. Modelled cooling corresponds to shear-margin thermal strengthening by 3–3.5 times compared to the warm-ice case, which must be compensated by some other weakening mechanism such as material damage or ice-crystal fabric anisotropy.
Over the last three decades, Britain has witnessed an unprecedented rise in the number of people receiving welfare benefits that has provoked fears of a growing underclass and mass welfare dependency. This book provides the first comprehensive analysis of the reasons for this growth and subjects notions of welfare dependency to empirical test.
To examine the costs and cost-effectiveness of mirtazapine compared with placebo over a 12-week follow-up period.
Design:
Economic evaluation in a double-blind randomized controlled trial of mirtazapine vs. placebo.
Setting:
Community settings and care homes in 26 UK centers.
Participants:
People with probable or possible Alzheimer’s disease and agitation.
Measurements:
The primary outcome was the incremental cost of participants’ health and social care per 6-point difference in Cohen-Mansfield Agitation Inventory (CMAI) score at 12 weeks. Secondary cost-utility analyses examined participants’ and unpaid carers’ gain in quality-adjusted life years (derived from EQ-5D-5L, DEMQOL-Proxy-U, and DEMQOL-U) from the health and social care and societal perspectives.
Results:
One hundred and two participants were allocated to each group; 81 mirtazapine and 90 placebo participants completed a 12-week assessment (87 and 95, respectively, completed a 6-week assessment). Mirtazapine and placebo groups did not differ on mean CMAI scores or health and social care costs over the study period, before or after adjustment for center and living arrangement (independent living/care home). On the primary outcome, neither mirtazapine nor placebo could be considered a cost-effective strategy with a high level of confidence. Groups did not differ in terms of participant self- or proxy-rated or carer self-rated quality of life scores, health and social care or societal costs, before or after adjustment.
Conclusions:
On cost-effectiveness grounds, the use of mirtazapine cannot be recommended for agitated behaviors in people living with dementia. Effective and cost-effective medications for agitation in dementia remain to be identified in cases where non-pharmacological strategies for managing agitation have been unsuccessful.
Only a limited number of patients with major depressive disorder (MDD) respond to a first course of antidepressant medication (ADM). We investigated the feasibility of creating a baseline model to determine which patients beginning ADM treatment in the US Veterans Health Administration (VHA) would be among the responders.
Methods
A 2018–2020 national sample of n = 660 VHA patients receiving ADM treatment for MDD completed an extensive baseline self-report assessment near the beginning of treatment and a 3-month self-report follow-up assessment. Using baseline self-report data along with administrative and geospatial data, we applied an ensemble machine learning method to develop a model of 3-month treatment response, defined by the Quick Inventory of Depression Symptomatology Self-Report and a modified Sheehan Disability Scale. The model was developed in a 70% training sample and tested in the remaining 30% test sample.
Results
In total, 35.7% of patients responded to treatment. The prediction model had an area under the ROC curve (s.e.) of 0.66 (0.04) in the test sample. A strong gradient in probability (s.e.) of treatment response was found across three subsamples of the test sample using training sample thresholds for high [45.6% (5.5)], intermediate [34.5% (7.6)], and low [11.1% (4.9)] probabilities of response. Baseline symptom severity, comorbidity, treatment characteristics (expectations, history, and aspects of current treatment), and protective/resilience factors were the most important predictors.
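To make the threshold-based stratification above concrete, the sketch below fits a simple model on a 70% training sample, evaluates AUC on the 30% test sample, and then splits the test sample at training-sample probability thresholds (tertiles here, purely for illustration) to compare observed response rates. Synthetic data and logistic regression stand in for the VHA data and the ensemble machine learning method actually used in the study.

```python
# Illustrative sketch of the evaluation pattern described above: fit on a 70%
# training sample, compute AUC on the 30% test sample, then stratify the test
# sample by training-sample probability thresholds and compare observed
# response rates. Synthetic data and a plain logistic model are placeholders
# for the VHA data and the ensemble learner used in the study.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=660, n_features=30, weights=[0.64],
                           random_state=0)   # ~36% "responders", as in the study
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.30, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
p_tr = model.predict_proba(X_tr)[:, 1]
p_te = model.predict_proba(X_te)[:, 1]

print(f"Test-sample AUC: {roc_auc_score(y_te, p_te):.2f}")

# Thresholds defined in the training sample (tertiles here), applied to the test sample
lo, hi = np.quantile(p_tr, [1 / 3, 2 / 3])
for name, mask in [("low", p_te < lo),
                   ("intermediate", (p_te >= lo) & (p_te < hi)),
                   ("high", p_te >= hi)]:
    print(f"{name:12s} stratum: observed response rate {y_te[mask].mean():.1%}")
```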
Conclusions
Although these results are promising, parallel models to predict response to alternative treatments based on data collected before initiating treatment would be needed for such models to help guide treatment selection.