Advances in artificial intelligence (AI) have great potential to help address societal challenges that are both collective in nature and present at national or transnational scale. Pressing challenges in healthcare, finance, infrastructure and sustainability, for instance, might all be productively addressed by leveraging and amplifying AI for national-scale collective intelligence. The development and deployment of this kind of AI faces distinctive challenges, both technical and socio-technical. Here, a research strategy for mobilising inter-disciplinary research to address these challenges is detailed and some of the key issues that must be faced are outlined.
Disease-modifying therapies (DMTs) for Alzheimer’s disease (AD) are emerging following successful clinical trials of therapies targeting amyloid beta (Aβ) protofibrils or plaques. Determining patient eligibility and monitoring treatment efficacy and adverse events, such as Aβ-related imaging abnormalities, necessitates imaging with MRI and PET. The Canadian Consortium on Neurodegeneration in Aging (CCNA) Imaging Workgroup aimed to synthesize evidence and provide recommendations on implementing imaging protocols for AD DMTs in Canada.
Methods:
The workgroup employed a Delphi process to develop these recommendations. Experts from radiology, neurology, biomedical engineering, nuclear medicine, MRI and medical physics were recruited. Surveys and meetings were conducted to achieve consensus on key issues, including protocol standardization, scanner strength, monitoring protocols based on risk profiles and optimal protocol lengths. Draft recommendations were refined through multiple iterations and expert discussions.
Results:
The recommendations emphasize standardized acquisition imaging protocols across manufacturers and scanner strengths to ensure consistency and reliability of clinical treatment decisions; tailored monitoring protocols based on DMTs’ safety and efficacy profiles; consistent monitoring regardless of perceived treatment efficacy; and MRI screening on 1.5T or 3T scanners with adapted protocols. An optimal protocol length of 20–30 minutes was deemed feasible; specific sequences are suggested.
Conclusion:
The guidelines aim to enhance imaging data quality and consistency, facilitating better clinical decision-making and improving patient outcomes. Further research is needed to refine these protocols and address evolving challenges with new DMTs. It is recognized that the administrative, financial and logistical capacity to deliver additional MRI and PET scans requires careful planning.
From early on, infants show a preference for infant-directed speech (IDS) over adult-directed speech (ADS), and exposure to IDS has been correlated with language outcome measures such as vocabulary. The present multi-laboratory study explores this issue by investigating whether there is a link between early preference for IDS and later vocabulary size. Infants’ preference for IDS was tested as part of the ManyBabies 1 project, and follow-up CDI data were collected from a subsample of this dataset at 18 and 24 months. A total of 341 (18 months) and 327 (24 months) infants were tested across 21 laboratories. In neither preregistered analyses with North American and UK English, nor exploratory analyses with a larger sample did we find evidence for a relation between IDS preference and later vocabulary. We discuss implications of this finding in light of recent work suggesting that IDS preference measured in the laboratory has low test-retest reliability.
Despite high UVB radiation from the sun in Australia (the primary source of vitamin D), vitamin D deficiency (serum 25-hydroxyvitamin D concentrations [25(OH)D] <50 nmol/L) is prevalent among Aboriginal and Torres Strait Islander peoples (27% of adults nationally; 39% of adults living in remote areas)(1). Vitamin D deficiency affects musculoskeletal health and may be associated with non-communicable diseases, such as type 2 diabetes and cardiovascular diseases, that are prevalent in Aboriginal and Torres Strait Islander peoples(2,3). As an alternative to UVB radiation, vitamin D can also be obtained from foods (e.g., fish, eggs, and meat) and supplements. However, vitamin D intake in Aboriginal and Torres Strait Islander peoples is currently unknown. Hence, we aimed to provide the first estimate of absolute vitamin D intake in Aboriginal and Torres Strait Islander peoples. We used food consumption data from the 2012–2013 National Aboriginal and Torres Strait Islander Nutrition and Physical Activity Survey and vitamin D food composition data for vitamin D3, 25(OH)D3, vitamin D2, and 25(OH)D2. Absolute vitamin D intake was estimated by sex and by remote and non-remote areas using bioactivity factors (BF) of 1 and 5 for 25(OH)D, which may be up to five times more bioactive than vitamin D. The estimated average requirement for vitamin D intake recommended by the Institute of Medicine is 10 μg/day(4). The estimated absolute vitamin D intake from food and beverages was low for Aboriginal and Torres Strait Islander peoples. The mean estimated absolute vitamin D intake of Aboriginal and Torres Strait Islander peoples was 2.9 μg/day and 5.3 μg/day for BF 1 and BF 5, respectively. Males had a higher mean intake (3.2 μg/day, BF 1 and 5.9 μg/day, BF 5) than females (2.6 μg/day, BF 1 and 4.7 μg/day, BF 5). Vitamin D intake was 2.9 μg/day (BF 1) and 5.2 μg/day (BF 5) in non-remote areas and 2.8 μg/day (BF 1) and 5.4 μg/day (BF 5) in remote areas. The high prevalence of vitamin D deficiency and low vitamin D intake highlight a need to promote vitamin D sufficiency through public health policies. The results from this study can be used to model food fortification strategies to provide evidence for the development of nutrition policies to improve the vitamin D status of the Aboriginal and Torres Strait Islander population.
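As a back-of-the-envelope illustration of the intake calculation described above, the sketch below sums the vitamers and weights 25(OH)D by the bioactivity factor. The per-day figures are invented placeholders chosen only so the BF 1 and BF 5 totals echo the reported national means; they are not actual survey or food-composition values.

```python
# Sketch: combining vitamin D vitamers into an absolute intake estimate.
# The daily figures and field names are hypothetical; the study used national
# survey consumption data and published composition tables for vitamin D3,
# 25(OH)D3, vitamin D2 and 25(OH)D2.

def absolute_vitamin_d_intake(d3_ug, oh_d3_ug, d2_ug, oh_d2_ug, bioactivity_factor=1):
    """Total vitamin D intake (micrograms/day).

    25(OH)D may be up to five times more bioactive than vitamin D,
    so it is weighted by a bioactivity factor (BF) of 1 or 5.
    """
    return (d3_ug + d2_ug) + bioactivity_factor * (oh_d3_ug + oh_d2_ug)

# Illustrative daily intake (micrograms) summed over all foods consumed
daily = {"d3_ug": 2.0, "oh_d3_ug": 0.6, "d2_ug": 0.3, "oh_d2_ug": 0.0}

print(absolute_vitamin_d_intake(**daily, bioactivity_factor=1))  # BF 1 -> 2.9
print(absolute_vitamin_d_intake(**daily, bioactivity_factor=5))  # BF 5 -> 5.3
```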
Accumulating evidence from case-control and population studies suggests attention-deficit/hyperactivity disorder (ADHD) confers a 2- to 5-fold risk of all-cause dementia later in life. Here, we investigate vascular burden as a potential mediator of this relationship, because vascular integrity is well known to be compromised in ADHD (due to chronic obesity, diabetes, and hypertension), and compromised vascular integrity is in turn a robust risk factor for neurodegeneration (via reduced cerebral blood flow). We use brain white matter hyperintensities (WMH) as a measure of vascular burden.
Participants and Methods:
Thirty-nine adults aged 48-81 years with clinical ADHD, and 37 matched controls, completed neuropsychological testing and 1.5 T structural neuroimaging. None had stroke. Cognitive tests were demographically-adjusted to Z scores using regression-based norms generated from the control group, and averaged across tests within domains of short- and long-term verbal memory (forward digit span, California Verbal Learning Test, Logical Memory), visual memory (Visual Recognition, Rey Complex Figure), processing speed (coding, trails A, Stroop word-reading and color-naming), language (Boston Naming Test, semantic fluency), visuoconstruction (clock drawing, Rey Complex Figure copy), and executive function (backward digit span, trails B, phonemic fluency, Stroop inhibition, Wisconsin Card Sorting Test). Total WMH volumes (i.e., combined periventricular and deep) within subcortical, temporal, frontal, parietal, and occipital regions were individually divided by regional volumes to produce a proportion of each region representing WMH, then log-transformed to correct for skew. Age-corrected linear regression quantified total effects of ADHD on cognition; when these were significant, mediation models quantified the direct effects of ADHD on WMH volumes and the direct effect of WMH volumes on cognition. Sobel’s test estimated indirect effects of ADHD on cognition via WMH.
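The mediation logic described above can be sketched as two regressions plus Sobel's test for the indirect path (ADHD → frontal WMH → cognition). The snippet below uses synthetic data and assumed variable names purely to illustrate the arithmetic of the test; it is not the study's actual model specification.

```python
# Sketch of a two-step mediation with Sobel's test (synthetic data,
# illustrative variable names; the study's models also adjusted for age).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(0)
n = 76
df = pd.DataFrame({
    "group": rng.integers(0, 2, n),   # 1 = ADHD, 0 = control
    "age": rng.uniform(48, 81, n),
})
df["frontal_wmh_log"] = -0.7 * df["group"] + rng.normal(0, 1, n)
df["exec_z"] = -0.3 * df["frontal_wmh_log"] - 0.5 * df["group"] + rng.normal(0, 1, n)

m_a = smf.ols("frontal_wmh_log ~ group + age", data=df).fit()           # path a: group -> mediator
m_b = smf.ols("exec_z ~ frontal_wmh_log + group + age", data=df).fit()  # path b: mediator -> outcome

a, se_a = m_a.params["group"], m_a.bse["group"]
b, se_b = m_b.params["frontal_wmh_log"], m_b.bse["frontal_wmh_log"]

# Sobel z for the indirect effect a*b
z = (a * b) / np.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
p = 2 * stats.norm.sf(abs(z))
print(f"indirect effect = {a * b:.3f}, Sobel z = {z:.2f}, p = {p:.3f}")
```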
Results:
Group had a significant total effect on Processing Speed (β=-1.154, p<.001) and on Executive Functioning (β=-0.587, p=.004), where ADHD participants had lower composite scores (M=-1.10, SD=1.76 and M=-0.54, SD=1.14 respectively) than controls (M=0.02, SD=0.74; M=0.00, SD=0.49). Only frontal-lobe WMH had direct effects on Processing Speed (β=-0.315, p=.012) and Executive Functioning (β=-0.273, p<.001). The direct effect of ADHD on frontal WMH was significant (β=-0.734, p=.016), and Sobel’s tests supported an indirect effect of ADHD on Executive Functioning (z=2.079, p=.038) but not Processing Speed (z=1.785, p=.074) via WMH. Because the effect of ADHD on WMH was negative (i.e., fewer WMH in ADHD) despite worse cognition than controls, we tested the a posteriori hypothesis that WMH burden may be relatively more deleterious for ADHD than controls. We found considerably stronger negative correlations between total WMH volumes and Processing Speed (r=-.423, p=.009) and Executive Functioning (r=-.528, p<.001) in the ADHD group than in controls (r=-.231, p=.175 and r=-.162, p=.346, respectively), even though total whole-brain proportion of WMH (M=0.15%, SD=0.27; Mann-Whitney U=430.0, p=.002) and frontal-lobe proportion of WMH volumes (M=0.33%, SD=0.51; Mann-Whitney U=464.0, p=.007) were lower in ADHD than in controls (M=0.29%, SD=0.42 and M=0.66%, SD=0.88, respectively).
Conclusions:
WMH burden contributes significantly to the relationship between ADHD and cognition, but ADHD remains an independent contributor to worse processing speed and executive functioning in older adults. Vascular burden may have relatively more deleterious effects on cognition in ADHD, potentially due to decades of accumulated allostatic load, whereas healthy controls can accumulate greater amounts of WMH before cognition is impacted.
Previous findings suggest that time setting errors (TSEs) in the Clock Drawing Test (CDT) may be related mainly to impairments in semantic and executive function. Recent attempts to dissociate the classic stimulus-bound error (setting the time to “10 to 11” instead of “10 past 11”) from other TSEs did not support hypotheses regarding this error being primarily executive in nature or different from other time setting errors in terms of neurocognitive correlates. This study aimed to further investigate the cognitive correlates of stimulus-bound errors and other TSEs, in order to trace possible underlying cognitive deficits.
Methods:
We examined cognitive test performance of participants with preliminary diagnoses associated with mild cognitive impairment. Among 490 participants, we identified clocks with stimulus-bound errors (n = 78), other TSEs (n = 41), other errors not related to time settings (n = 176), or errorless clocks (n = 195).
Results:
No differences were found on any dependent measure between the stimulus-bound and the other TSEs groups. Group comparisons suggested that TSEs in general are associated with lower performance on various cognitive measures, especially on semantic and working memory measures. Regression analysis further highlighted semantic and verbal working memory difficulties as the most prominent deficits associated with these errors.
Conclusion:
TSEs in the CDT may indicate underlying deficits in semantic function and working memory. In addition, results support previous findings related to the diagnostic value of TSEs in detecting cognitive impairment.
Obstructive sleep apnea (OSA) is prevalent after stroke and associated with recurrent stroke, prolonged hospitalization, and decreased functional recovery. Sex differences in post-stroke OSA remain underexplored. The objective of this study was to evaluate sex differences in functional outcomes, stroke and OSA severity, and clinical manifestations of OSA in stroke patients with OSA.
Methods:
We retrospectively evaluated data from three previously conducted studies. Study patients had an imaging-confirmed stroke and had been found to have OSA (apnea–hypopnea index [AHI] ≥ 5) on either in-laboratory polysomnography or home sleep apnea testing performed within 1 year of their stroke. Linear regression models were used to evaluate study outcomes.
Results:
In total, 171 participants with post-stroke OSA (117 males [68.4%] and 54 females [31.6%]) were included. Female sex was an independent predictor of greater functional impairment (β = 0.37, 95% CI 0.029–0.71, p = 0.03), increased stroke severity (β = 1.009, 95% CI 0.032–1.99, p = 0.04), and greater post-stroke depressive symptoms (β = 3.73, 95% CI 0.16–7.29, p = 0.04). Female sex was associated with lower OSA severity, as measured by the AHI (β = –5.93, 95% CI –11.21 to –0.66). Sex was not an independent predictor of specific symptoms of OSA such as daytime sleepiness, snoring, tiredness, and observed apneas.
Conclusion:
Females with post-stroke OSA had poorer functional outcomes and more severe strokes compared to males, despite having lower OSA severity. Females with post-stroke OSA also exhibited more depressive symptoms. Understanding sex differences in patients with post-stroke OSA will likely facilitate better recognition of OSA and potentially improve clinical outcomes.
Alzheimer’s disease (AD) is experienced by > 600,000 Canadians. Disease-modifying therapies (DMTs) for earlier stages of disease are in development. Existing health system capacity constraints and the need for biomarker-driven diagnostics to confirm DMT eligibility are concerning. This study aimed to characterize the capacity gap related to early AD (eAD) treatment with DMTs in Canada.
Methods:
A capacity model was developed to simulate the flow of a patient from screening to treatment for eAD, to quantify the gap between available and required healthcare resources, and to qualify the bottlenecks restricting the patient journey at provincial and national levels. The model inputs (epidemiological, human resource, and clinical) were evidence-based and informed by healthcare professionals and patient advocates.
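For intuition, a capacity model of this kind amounts to pushing an annual demand figure through a chain of resource-limited steps and recording where the flow is throttled. The sketch below is a deliberately simplified illustration; every step name, capacity, and demand figure is a hypothetical placeholder, not a model input from the study.

```python
# Sketch of a screening-to-treatment capacity pipeline (hypothetical numbers).
# Each step has an annual capacity (patients/year); flow through the pipeline
# is limited by the most constrained upstream step, and the shortfall at each
# step accumulates as a wait list.

annual_demand = 100_000  # hypothetical patients referred for work-up

steps = {                          # hypothetical annual capacities
    "specialist_assessment": 30_000,
    "pet_or_csf_biomarker": 12_000,
    "mri_screening": 25_000,
    "infusion_slots": 20_000,
}

flow = annual_demand
for step, capacity in steps.items():
    cleared = min(flow, capacity)
    wait_list = flow - cleared
    print(f"{step:>24}: throughput {cleared:>7,}, new wait list {wait_list:>7,}")
    flow = cleared  # only patients who clear this step reach the next one

print(f"\nPatients reaching treatment: {flow:,} ({flow / annual_demand:.1%} of demand)")
```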
Results:
The model estimated that nationally < 2% of patients would have access to the required healthcare resources for treatment with a DMT. Eligibility assessment represented the step with the largest capacity gap across all provinces, with a wait list of about 382,000 Canadians one year following DMT introduction. The top three resource gaps included AD specialist time and positron emission tomography and magnetic resonance imaging exam slots. Sensitivity analysis showed that full reliance on cerebrospinal fluid for eligibility testing increased capacity for assessment by about 47,000 patients.
Conclusion:
This model highlights that the Canadian health system is critically under-resourced to diagnose, assess, and treat patients with eAD with DMT. It underscores an urgent need for national policy and provincial resource allocation to close the gap.
Background: Saccade and pupil responses are potential neurodegenerative disease biomarkers due to overlap between oculomotor circuitry and disease-affected areas. Instruction-based tasks have previously been examined as biomarker sources, but are arduous for patients with limited cognitive abilities; additionally, few studies have evaluated multiple neurodegenerative pathologies concurrently. Methods: The Ontario Neurodegenerative Disease Research Initiative recruited individuals with Alzheimer’s disease (AD), mild cognitive impairment (MCI), amyotrophic lateral sclerosis (ALS), frontotemporal dementia, progressive supranuclear palsy, or Parkinson’s disease (PD). Patients (n=274, age 40-86) and healthy controls (n=101, age 55-86) viewed 10 minutes of frequently changing video clips without instruction while their eyes were tracked. We evaluated differences in saccade and pupil parameters (e.g. saccade frequency and amplitude, pupil size, responses to clip changes) between groups. Results: Preliminary data indicate low-level behavioural alterations in multiple disease cohorts: increased centre bias, lower overall saccade rate, and reduced saccade amplitude. After clip changes, patient groups generally demonstrated lower saccade rates but higher microsaccade rates, to varying degrees. Additionally, pupil responses were blunted (AD, MCI, ALS) or exaggerated (PD). Conclusions: This task may generate behavioural biomarkers even in cognitively impaired populations. Future work should explore the possible effects of factors such as medication and disease stage.
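As a rough illustration of how saccade frequency and amplitude can be extracted from free-viewing gaze data, the sketch below applies a simple velocity-threshold detector to a synthetic gaze trace; the sampling rate, threshold and signal are illustrative and do not reflect the study's processing pipeline.

```python
# Sketch: velocity-threshold saccade detection on gaze samples
# (synthetic trace; threshold and sampling rate are illustrative only).
import numpy as np

fs = 500.0                       # sampling rate (Hz), illustrative
t = np.arange(0, 10, 1 / fs)     # 10 s of gaze data
x = np.zeros_like(t)
# Synthetic gaze trace: smooth step-like position jumps stand in for saccades
for jump_time, size_deg in [(2.0, 5.0), (4.5, -8.0), (7.0, 3.0)]:
    x += size_deg / (1 + np.exp(-(t - jump_time) * 50))

velocity = np.abs(np.gradient(x, 1 / fs))   # deg/s
is_saccade = velocity > 30.0                # simple velocity threshold

# Count saccades as contiguous runs above threshold and measure their amplitude
onsets = np.flatnonzero(np.diff(is_saccade.astype(int)) == 1)
offsets = np.flatnonzero(np.diff(is_saccade.astype(int)) == -1)
amplitudes = [abs(x[end] - x[start]) for start, end in zip(onsets, offsets)]

print(f"saccade rate: {len(onsets) / t[-1]:.2f} /s")
print("amplitudes (deg):", np.round(amplitudes, 1))
```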
To understand which anthropometric diagnostic criteria best discriminate higher from lower risk of death in children and explore programme implications.
Design:
A multiple cohort individual data meta-analysis of mortality risk (within 6 months of measurement) by anthropometric case definitions. Sensitivity, specificity, informedness and inclusivity in predicting mortality, face validity and compatibility with current standards and practice were assessed and operational consequences were modelled.
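For readers unfamiliar with informedness (Youden's J), the sketch below shows how it and the related metrics follow from a 2×2 table of case-definition status against death within 6 months. The counts are invented for illustration only; inclusivity and the other assessment criteria used in the study are not reproduced here.

```python
# Sketch: metrics for one anthropometric case definition against death
# within 6 months (counts below are invented for illustration only).
tp, fn = 120, 80        # died: meeting / not meeting the case definition
fp, tn = 900, 8900      # survived: meeting / not meeting the case definition

sensitivity = tp / (tp + fn)                   # deaths detected by the definition
specificity = tn / (tn + fp)                   # survivors correctly excluded
informedness = sensitivity + specificity - 1   # Youden's J

print(f"sensitivity  = {sensitivity:.2f}")
print(f"specificity  = {specificity:.2f}")
print(f"informedness = {informedness:.2f}")
```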
Setting:
Community-based cohort studies in twelve low-income countries between 1977 and 2013 in settings where treatment of wasting was not widespread.
Participants:
Children aged 6 to 59 months.
Results:
Of the twelve anthropometric case definitions examined, four had the highest informedness in predicting mortality: weight-for-age Z-score (WAZ) <−2, mid-upper arm circumference (MUAC) <125 mm, MUAC < 115 mm or WAZ < −3, and WAZ < −3. A combined case definition (MUAC < 115 mm or WAZ < −3) was better at predicting deaths associated with weight-for-height Z-score <−3 and concurrent wasting and stunting (WaSt) than the single WAZ < −3 case definition. After assessment of all criteria, the combined case definition performed best. The simulated workload for programmes admitting on MUAC < 115 mm or WAZ < −3, when adjusted with a proxy for required intensity and/or duration of treatment, was 1·87 times larger than that of programmes admitting on MUAC < 115 mm alone.
Conclusions:
A combined case definition detects nearly all deaths associated with severe anthropometric deficits, suggesting that therapeutic feeding programmes may achieve higher impact (preventing mortality and improving coverage) by using it. There remain operational questions to examine further before wide-scale adoption can be recommended.
To compare the prognostic value of mid-upper arm circumference (MUAC), weight-for-height Z-score (WHZ) and weight-for-age Z-score (WAZ) for predicting death over periods of 1, 3 and 6 months follow-up in children.
Design:
Pooled analysis of twelve prospective studies examining survival after anthropometric assessment. Sensitivity and false-positive ratios to predict death within 1, 3 and 6 months were compared for three individual anthropometric indices and their combinations.
Setting:
Community-based, prospective studies from twelve countries in Africa and Asia.
Participants:
Children aged 6–59 months living in the study areas.
Results:
For all anthropometric indices, the receiver operating characteristic curves were higher for shorter than for longer durations of follow-up. Sensitivity was higher for death with 1-month follow-up compared with 6 months by 49 % (95 % CI (30, 69)) for MUAC < 115 mm (P < 0·001), 48 % (95 % CI (9·4, 87)) for WHZ < -3 (P < 0·01) and 28 % (95 % CI (7·6, 42)) for WAZ < -3 (P < 0·005). This was accompanied by an increase in false positives of only 3 % or less. For all durations of follow-up, WAZ < -3 identified more children who died and were not identified by WHZ < -3 or by MUAC < 115 mm, 120 mm or 125 mm, but the use of WAZ < -3 led to an increased false-positive ratio up to 16·4 % (95 % CI (12·0, 20·9)) compared with 3·5 % (95 % CI (0·4, 6·5)) for MUAC < 115 mm alone.
Conclusions:
Frequent anthropometric measurements significantly improve the identification of malnourished children with a high risk of death without markedly increasing false positives. Combining two indices increases sensitivity but also increases false positives among children meeting case definitions.
Patients suffering from the behavioral variant of Frontotemporal Dementia (FTD-b) often exaggerate their abilities. Are those errors in judgment limited to domains in which patients under-perform, or do FTD-b patients overestimate their abilities in other domains? Is overconfidence in FTD-b patients domain-specific or domain-general? To address this question, we asked patients at early stages of FTD-b to judge their performance in two domains (attention, perception) in which they exhibit relatively spared abilities. In both domains, FTD-b patients overestimated their performance relative to patients with Dementia of Alzheimer Type (DAT) and healthy elderly subjects. Results are consistent with a domain-general deficit in metacognitive judgment. We discuss these findings in relation to “regression to the mean” accounts of overconfidence and the role of emotions in metacognitive judgments.
The legal brief is a primary vehicle by which lawyers seek to persuade appellate judges. Despite wide acceptance that briefs are important, empirical scholarship has yet to establish their influence on the Supreme Court or fully explore justices’ preferences regarding them. We argue that emotional language conveys a lack of credibility to justices and thereby diminishes the party’s likelihood of garnering justices’ votes. The data concur. Using an automated textual analysis program, we find that parties who employ less emotional language in their briefs are more likely to win a justice’s vote, a result that holds even after controlling for other features correlated with success, such as case quality. These findings suggest that advocates seeking to influence judges can enhance their credibility and attract justices’ votes by employing measured, objective language.
To evaluate whether cerebrospinal fluid biomarkers, apolipoprotein e4, neuroimaging abnormalities, and neuropsychological data differentially predict progression from mild cognitive impairment (MCI) to dementia for men and women.
Methods:
Participants who were diagnosed with MCI at baseline (n = 449) were classified as either progressing to Alzheimer’s dementia at follow-up or as not progressing. Men and women were first compared using bivariate analyses. Sex-stratified Cox proportional hazard regressions were performed examining the relationship between baseline data and the likelihood of progressing to dementia. Sex interactions were subsequently examined.
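A minimal sketch of what sex-stratified Cox models and a sex-interaction test can look like, using the lifelines package on synthetic data; the column names stand in for the study's baseline predictors and are not its actual variables.

```python
# Sketch: sex-stratified Cox models plus a sex-interaction term
# (synthetic data; column names are placeholders for the study's predictors).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
n = 449
df = pd.DataFrame({
    "female": rng.integers(0, 2, n),
    "age": rng.uniform(55, 90, n),
    "ravlt_delayed": rng.normal(0, 1, n),    # z-scored delayed recall
    "time_months": rng.exponential(36, n),   # follow-up time
    "progressed": rng.integers(0, 2, n),     # 1 = progressed to dementia
})

# Sex-stratified models
for sex, sub in df.groupby("female"):
    cph = CoxPHFitter().fit(sub.drop(columns="female"),
                            duration_col="time_months", event_col="progressed")
    print("female" if sex else "male", cph.hazard_ratios_["ravlt_delayed"])

# Pooled model with a sex interaction on the predictor of interest
cph_int = CoxPHFitter().fit(
    df, duration_col="time_months", event_col="progressed",
    formula="age + female * ravlt_delayed")
print(cph_int.summary[["coef", "p"]])
```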
Results:
Cox proportional hazard regression controlling for age and education indicated that all variables significantly predicted subsequent progression to dementia for men and women. Sex interactions indicated that only Rey Auditory Verbal Learning Test (RAVLT) delayed recall and Functional Activities Questionnaire (FAQ) were significantly stronger risk factors for women. When all variables were entered into a fully adjusted model, significant risk factors for women were Aβ42, hippocampal volume, RAVLT delayed recall, Boston Naming Test, and FAQ. In contrast, for men, Aβ42, p-tau181, p-tau181/Aβ42, hippocampal volume, category fluency and FAQ were significant risk factors. Interactions with sex were only significant for p-tau181/Aβ42 and RAVLT delayed recall for the fully adjusted model.
Conclusions:
Men and women with MCI may differ in which factors predict subsequent dementia, although future analyses with greater power are needed to evaluate sex differences. We hypothesize that brain and cognitive reserve theories may partially explain these findings.
Background: Eye movements reveal neurodegenerative disease processes due to overlap between oculomotor circuitry and disease-affected areas. Characterizing oculomotor behaviour in context of cognitive function may enhance disease diagnosis and monitoring. We therefore aimed to quantify cognitive impairment in neurodegenerative disease using saccade behaviour and neuropsychology. Methods: The Ontario Neurodegenerative Disease Research Initiative recruited individuals with neurodegenerative disease: one of Alzheimer’s disease, mild cognitive impairment, amyotrophic lateral sclerosis, frontotemporal dementia, Parkinson’s disease, or cerebrovascular disease. Patients (n=450, age 40-87) and healthy controls (n=149, age 42-87) completed a randomly interleaved pro- and anti-saccade task (IPAST) while their eyes were tracked. We explored the relationships of saccade parameters (e.g. task errors, reaction times) to one another and to cognitive domain-specific neuropsychological test scores (e.g. executive function, memory). Results: Task performance worsened with cognitive impairment across multiple diseases. Subsets of saccade parameters were interrelated and also differentially related to neuropsychology-based cognitive domain scores (e.g. antisaccade errors and reaction time associated with executive function). Conclusions: IPAST detects global cognitive impairment across neurodegenerative diseases. Subsets of parameters associate with one another, suggesting disparate underlying circuitry, and with different cognitive domains. This may have implications for use of IPAST as a cognitive screening tool in neurodegenerative disease.
Background: Candida auris is an emerging multidrug-resistant yeast that is transmitted in healthcare facilities and is associated with substantial morbidity and mortality. Environmental contamination is suspected to play an important role in transmission, but additional information is needed to inform environmental cleaning recommendations to prevent spread. Methods: We conducted a multiregional (Chicago, IL; Irvine, CA) prospective study of environmental contamination associated with C. auris colonization of patients and residents of 4 long-term care facilities and 1 acute-care hospital. Participants were identified by screening or clinical cultures. Samples were collected from participants’ body sites (eg, nares, axillae, inguinal creases, palms and fingertips, and perianal skin) and their environment before room cleaning. Daily room cleaning and disinfection by facility environmental service workers was followed by targeted cleaning of high-touch surfaces by research staff using hydrogen peroxide wipes (an EPA-approved product for C. auris, List P). Samples were collected immediately after cleaning from high-touch surfaces and repeated at 4-hour intervals up to 12 hours. A pilot phase (n = 12 patients) was conducted to identify the value of testing specific high-touch surfaces to assess environmental contamination. High-yield surfaces were included in the full evaluation phase (n = 20 patients) (Fig. 1). Samples were submitted for semiquantitative culture of C. auris and other multidrug-resistant organisms (MDROs) including methicillin-resistant Staphylococcus aureus (MRSA), vancomycin-resistant Enterococcus (VRE), extended-spectrum β-lactamase–producing Enterobacterales (ESBLs), and carbapenem-resistant Enterobacterales (CRE). Times to room surface contamination with C. auris and other MDROs after effective cleaning were analyzed. Results: Candida auris colonization was most frequently detected in the nares (72%) and on the palms and fingertips (72%). Cocolonization of body sites with other MDROs was common (Fig. 2). Surfaces located close to the patient were commonly recontaminated with C. auris by 4 hours after cleaning, including the overbed table (24%), bed handrail (24%), and TV remote or call button (19%). Environmental cocontamination was more common with resistant gram-positive organisms (MRSA and VRE) than with resistant gram-negative organisms (Fig. 3). C. auris was rarely detected on surfaces located outside a patient’s room (1 of 120 swabs; <1%). Conclusions: Environmental surfaces near C. auris–colonized patients were rapidly recontaminated after cleaning and disinfection. Cocolonization of skin and environment with other MDROs was common, with resistant gram-positive organisms predominating over gram-negative organisms on environmental surfaces. Limitations include lack of organism sequencing or typing to confirm that environmental contamination came from the room resident. Rapid recontamination of environmental surfaces after manual cleaning and disinfection suggests that alternate mitigation strategies should be evaluated.
The East of England is considered the “bread basket” of the UK, supplying domestic and global food markets, but it is under pressure from policy, economic and environmental challenges. Using a mixed-methods approach, this chapter examines the risks affecting the arable farming sector in the East of England and describes the role of knowledge networks and learning for resilience.
Current agricultural systems in Europe are locked into environmentally unsustainable practices due to a range of institutional, cultural, social and financial factors. These are compounded by environmental challenges. This chapter assesses three case studies in Europe and their respective stakeholder perspectives on challenges and potential solutions towards greater environmental sustainability.
How can psychiatrists best provide care in complex, sometimes overwhelming disasters? COVID-19 strained every aspect of health care to the breaking point, from finances to pharmaceutical supply lines. We can expect more challenges to prescribing in the future, as shown by recent hurricanes in Puerto Rico, fires in California, and ice storms in Texas. When medications become scarce or inaccessible, clinicians need to make difficult prescribing decisions. We suggest that a culture of deprescribing, a systematic approach to reducing or simplifying medications, could be applied to a wide variety of crises. Deprescribing is defined as the planned reduction of medications to improve patient health or to reduce side effects (see deprescribing.org). It has been used to reduce polypharmacy in geriatric and other complex populations, and it provides evidence-based guidance for phasing out many classes of medications. It is part of a larger effort to reduce waste in health care and to make pharmacy more rational. Disasters and resource scarcity, however, require a different approach. In contrast to routine care focused on individual patients, crisis standards of care (CSC) shift the clinical focus to the community. Instead of deprescribing guidelines for individual clinicians, CSC deprescribing would be national policies addressing shortages of important medications. We conducted a scoping review looking for studies of deprescribing in a crisis.
Methods/Results
We extracted 1340 references from Google Scholar (2016 to 2021) using the search string (deprescribing) AND (disaster OR crisis OR climate OR pandemic OR supply lines). A scan of the texts found 160 references matching our criteria, of which only 19 addressed deprescribing as a strategy to strengthen health systems or providers in an emergency. Most of those were related to scarce supplies during COVID, and a few addressed the carbon impact of medications. We also reviewed related literatures on medication supply chain vulnerabilities, WHO Essential Medicines, and healthcare rationing.
Implications
Deprescribing gained attention during the COVID pandemic, both in response to disrupted supply lines and as a way to improve patient safety. Writers concerned with climate change support deprescribing to reduce the carbon impact of medications. Deprescribing as crisis policy could help streamline national stockpiles, supply chains, and manufacturing. Education could make deprescribing second nature for clinicians, potentially decreasing stress and increasing flexibility in future emergencies. Barriers to deprescribing generally include cultural inertia, industry lobbying, gaps in education, and malpractice fears. In a crisis, deprescribing guidelines could provide clinicians with confidence and flexibility while conserving scarce resources. Research is needed to evaluate deprescribing guidelines for crises, especially to ensure equity in how they reduce polypharmacy and save money.