Integrating psychosocial health services into paediatric surgical specialty care is essential for addressing behavioural and psychological aspects of illness and reducing healthcare disparities. This is crucial for patients facing CHD, who are at higher risk for depression, anxiety, and attention-deficit/hyperactivity disorder, a risk that is significantly influenced by their caregivers’ mental well-being.
Methods:
The Pediatric Psychosocial Preventative Health Model framework was utilised by a psychosocial team to assess biopsychosocial needs in CHD patients during their first cardiac surgery evaluations. Patient and family needs were categorised into universal, targeted, and clinical tiers, allowing for responsive interdisciplinary services. Screening tools such as the Psychosocial Assessment Tool, Pediatric Quality of Life Inventory, and Depression, Anxiety, and Stress Scales were used during initial consultations to guide appropriate interventions and referrals.
Results:
Universal-tier patients received comprehensive support focused on preventive measures, resource access, and education to promote resilience. Targeted-tier care involved intensive, collaborative efforts, providing specialised psychological evaluations and one-on-one time with experts. Clinical-tier families required specialised, intensive interventions such as advanced cognitive behavioural therapy and medication management. The Pediatric Psychosocial Preventative Health Model framework and psychosocial team workflow allowed for individualised management strategies, ensuring that each family received timely and appropriate interventions based on their unique needs.
Conclusion:
Integrating psychosocial services into initial surgical evaluations is critical for addressing CHD patients’ psychological and social needs, promoting an interdisciplinary approach that enhances overall family functioning and well-being.
Single ventricle CHD requires lifelong care, yet its broader impact on patients and families remains unclear. Engaging patients in care improvement can strengthen relationships and outcomes.
Objectives:
This study evaluates how individuals with single ventricle CHD prioritise gaps in care based on personal and family impact.
Methods:
Using Mery et al.’s identified care gaps, a survey was distributed to parents of children with single ventricle CHD and adults with single ventricle CHD in English or Spanish. Participants rated each gap from 1 (not important) to 10 (extremely important), with a “Not Applicable” option. Responses were analysed using median, weighted, and total rating scores. Sociodemographic data were examined, and univariate analysis and a race/ethnicity and insurance matrix were conducted on parent responses.
Results:
Among 36 complete responses, 30 (83.3%) were parents and 6 (16.7%) patients. Most parents were female (29, 96.7%) and White non-Hispanic (24, 80.0%), with 17 (56.7%) having privately insured children. Median child age was 6.5 [interquartile range: 3.0–12.8] years, and 55.3% had Hypoplastic Left Heart Syndrome. The highest-rated gap was “Uncertainty of prognosis in adulthood” (9.5 [interquartile range: 8.0–10.0]). The lowest was “Pregnancy termination presented repeatedly” (1.0 [interquartile range: 1.0–7.0]). Non-White parents rated “Transition to adult healthcare” (p = 0.017) and “Navigating resources” (p = 0.037) higher. Patients (median age 33.0 years) prioritised “Rescheduling surgical procedures” and “Transition to adult healthcare” (both 10.0). “Support in family planning” had the highest total rating score (12). The lowest-rated was “Limited guidance on transition to adolescence” (0.0 [interquartile range: 0.0–0.0]).
Conclusions:
Patients and families prioritise care gaps differently. Aligning their perspectives with clinical expertise can guide tailored solutions to improve outcomes for single ventricle CHD patients.
South Asians are among the fastest-growing immigrant groups in the United States (U.S.), with a unique disease risk profile. Due in part to immigration and acculturation factors, South Asians engage differently with behavioural risk factors (e.g. smoking, alcohol intake, physical activity, sedentary behaviour, and diet) for hypertension, which may be modified for the primary prevention of cardiovascular disease. Using data from the Mediators of Atherosclerosis in South Asians Living in America cohort, we conducted a cross-sectional analysis to evaluate the association between behavioural risk factors for cardiovascular disease and diet. We created a behavioural risk factor score based on smoking status, alcohol consumption, physical activity, and TV watching. We also calculated a Dietary Approaches to Stop Hypertension (DASH) dietary score based on inclusion of relevant dietary components. We used both scores to examine the association between engaging with risk factors for hypertension and the DASH diet among a cohort of South Asian adults. We found that participants with 3–4 behavioural risk factors had a DASH diet score that was 3 units lower than those with no behavioural risk factors (aβ: –3.25; 95% CI: –4.28, –2.21) and were 86% less likely to have a DASH diet score in the highest category compared to the lowest DASH diet score category (aOR: 0.14; 95% CI: 0.05, 0.37) in the fully adjusted models. These findings highlight the relationship between behavioural risk factors for hypertension and diet quality among South Asians in the U.S.
To assess the impact of the COVID-19 pandemic on first-episode psychosis (FEP) presentations across two Early Intervention in Psychosis (EIP) services in Ireland, by comparing pre-pandemic and post-pandemic cohorts.
Methods:
A cross-sectional observational design with retrospective medical record review was employed. The study population comprised 187 FEP patients (77 in pre-pandemic and 110 in post-pandemic cohort). Outcomes measured included duration of untreated psychosis (DUP), FEP presentation numbers, referral sources, global assessment of functioning scores, inpatient admissions, substance misuse and service delivery methods. Statistical analyses utilised chi-square tests to assess categorical variables, Mann–Whitney U tests to compare non-normally distributed continuous variables and Kruskal–Wallis tests to examine interactions between categorical and continuous variables.
Results:
A significant increase in FEP presentations was observed in the post-pandemic cohort (p = 0.003), with an increase in all urban areas and a decrease in the study’s only rural area. The difference in DUP between cohorts was not significant. However, a significant interaction between gender, cohort, and DUP was found (p = 0.008), with women in the post-pandemic cohort experiencing longer DUP (p = 0.01). A significant rise in telephone (p = 0.05) and video consultations (p = 0.001) offered was observed in the post-pandemic cohort. A similar number of in-person appointments was attended across both cohorts.
Conclusions:
This study highlights the impact of the pandemic on FEP presentations, particularly in rural areas and regarding increased DUP among women. These findings underscore the need for flexible EIP services that can respond to public health crises. Despite increased presentations, services adapted, maintaining continuity of care through telehealth and modified in-person contact.
Climate change is no longer a problem only for future generations; its impact is already taking a toll in many parts of the world. Climate change has already caused substantial, and increasingly irreversible, damage to ecosystems. These issues combined will inevitably lead to an increase in human suffering and forced displacement, with significant ramifications for health care systems. In this editorial we outline how climate change is already affecting both physical and mental health. Health professionals have a role to play in addressing this great challenge of our time, and should reflect on how to promote means of climate change mitigation and adaptation within their spheres of influence – clinical, education, advocacy, administration, and research.
Providing access to food in schools can serve as a platform for food system transformation, while simultaneously improving educational outcomes and livelihoods. Locally grown and procured food is a nutritious, healthy, and efficient way to provide schoolchildren with a daily meal while, at the same time, improving opportunities for smallholder farmers(1). While there is significant potential for school food provision activities to support healthy dietary behaviours in the Pacific Islands region, there is limited evidence of these types of activities(2), including their scope and links to local food production in the region. Therefore, the aim of this scoping study was to understand the current state of school food activities (school feeding, gardening and other food provision activities) and any current, and potential, links to local agriculture in the Pacific Islands. A regional mapping activity was undertaken, initially covering 22 Pacific Island countries. The mapping included two steps: 1) a desk-based scoping review including peer-reviewed and grey literature (2007-2022) and 2) one-hour semi-structured online Zoom interviews with key country stakeholders. Twelve sources were identified, predominantly grey literature (n = 9). Thirty interviews were completed with at least 1 key stakeholder from 15 countries. A variety of school food provision activities were identified, including school feeding programs (n = 16, of varying scale), programs covering both school feeding and school gardens (n = 2), school garden programs (n = 12), and other school food provision activities (n = 4, including taste/sensory education, food waste reduction, increasing canteen capacity for local foods, supply chain distribution between local agriculture and schools). Existing links to local agriculture varied for the different programs. Of the 16 school feeding programs, 8 had a requirement for the use of local produce (policy requirement n = 6, traditional requirement from leaders n = 2).
Of the 12 school garden programs, 6 used local or traditional produce in the garden and 5 involved local farmers in varying capacities. Challenges to linking local agriculture into school food provision programs were reported for 17 activities and were context dependent. Common challenges included limited funding, inflation, COVID-19, inadequate produce supply for the scale of program, limited farmer capacity, limited institutional support for local produce, low produce storage life, climatic conditions and disasters, water security, delayed procurement processes, and limited professional development and upskilling opportunities. Modernisation and colonisation of food systems, resulting in a preference for hyperpalatable foods, and challenges in incorporating local produce in a way that is accepted by students were also identified as challenges. This evidence can be used to develop a pathway to piloting and implementing models of school food provision programs and promoting opportunities for shared learning and collaboration with key stakeholders across the Pacific Islands region.
Blood-based biomarkers represent a scalable and accessible approach for the detection and monitoring of Alzheimer’s disease (AD). Plasma phosphorylated tau (p-tau) and neurofilament light (NfL) are validated biomarkers for the detection of tau and neurodegenerative brain changes in AD, respectively. There is now an emphasis on expanding beyond these markers to detect and provide insight into the pathophysiological processes of AD. To this end, a reactive astrocytic marker, namely plasma glial fibrillary acidic protein (GFAP), has been of interest. Yet, little is known about the relationship between plasma GFAP and AD. Here, we examined the association between plasma GFAP, diagnostic status, and neuropsychological test performance. Diagnostic accuracy of plasma GFAP was compared with plasma measures of p-tau181 and NfL.
Participants and Methods:
This sample included 567 participants from the Boston University (BU) Alzheimer’s Disease Research Center (ADRC) Longitudinal Clinical Core Registry, including individuals with normal cognition (n=234), mild cognitive impairment (MCI) (n=180), and AD dementia (n=153). The sample included all participants who had a blood draw. Participants completed a comprehensive neuropsychological battery (sample sizes across tests varied due to missingness). Diagnoses were adjudicated during multidisciplinary diagnostic consensus conferences. Plasma samples were analyzed using the Simoa platform. Binary logistic regression analyses tested the association between GFAP levels and diagnostic status (i.e., cognitively impaired due to AD versus unimpaired), controlling for age, sex, race, education, and APOE e4 status. Area under the curve (AUC) statistics from receiver operating characteristics (ROC) using predicted probabilities from binary logistic regression examined the ability of plasma GFAP to discriminate diagnostic groups compared with plasma p-tau181 and NfL. Linear regression models tested the association between plasma GFAP and neuropsychological test performance, accounting for the above covariates.
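The discrimination analysis described above, fitting a binary logistic regression and then computing ROC AUC from its predicted probabilities, can be sketched as follows. This is an illustrative sketch only, not the study’s code: the data are synthetic, and all variable names (`gfap_z`, `impaired`, `age`) are assumptions for demonstration.

```python
# Sketch of the analysis pipeline: binary logistic regression on a plasma
# marker plus a covariate, then ROC AUC from the predicted probabilities.
# Synthetic data throughout; not the authors' dataset or code.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 200
impaired = rng.integers(0, 2, size=n)         # 0 = unimpaired, 1 = impaired (synthetic)
gfap_z = impaired * 0.8 + rng.normal(size=n)  # z-scored marker, shifted in impaired group
age = rng.normal(74, 7.5, size=n)             # covariate (synthetic)
X = np.column_stack([gfap_z, age])

model = LogisticRegression(max_iter=1000).fit(X, impaired)
pred_prob = model.predict_proba(X)[:, 1]      # predicted probability of impairment
auc = roc_auc_score(impaired, pred_prob)      # area under the ROC curve
print(round(auc, 2))
```

Comparing biomarkers then amounts to repeating this fit with each marker (or a joint panel) and comparing the resulting AUCs, as reported in the Results.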
Results:
The mean (SD) age of the sample was 74.34 (7.54), 319 (56.3%) were female, 75 (13.2%) were Black, and 223 (39.3%) were APOE e4 carriers. Higher GFAP concentrations were associated with increased odds for having cognitive impairment (GFAP z-score transformed: OR=2.233, 95% CI [1.609, 3.099], p<0.001; non-z-transformed: OR=1.004, 95% CI [1.002, 1.006], p<0.001). ROC analyses, comprising GFAP and the above covariates, showed that plasma GFAP discriminated the cognitively impaired from unimpaired (AUC=0.75) and was similar, but slightly superior, to plasma p-tau181 (AUC=0.74) and plasma NfL (AUC=0.74). A joint panel of the plasma markers had the greatest discrimination accuracy (AUC=0.76). Linear regression analyses showed that higher GFAP levels were associated with worse performance on neuropsychological tests assessing global cognition, attention, executive functioning, episodic memory, and language abilities (ps<0.001) as well as higher CDR Sum of Boxes (p<0.001).
Conclusions:
Higher plasma GFAP levels differentiated participants with cognitive impairment from those with normal cognition and were associated with worse performance on all neuropsychological tests assessed. GFAP had similar accuracy in detecting those with cognitive impairment compared with p-tau181 and NfL; however, a panel of all three biomarkers was optimal. These results support the utility of plasma GFAP in AD detection and suggest the pathological processes it represents might play an integral role in the pathogenesis of AD.
Blood-based biomarkers offer a more feasible approach to Alzheimer’s disease (AD) detection, management, and the study of disease mechanisms than current in vivo measures. Given their novelty, these plasma biomarkers must be assessed against postmortem neuropathological outcomes for validation. Research has shown utility in plasma markers of the proposed AT(N) framework; however, recent studies have stressed the importance of expanding this framework to include other pathways. There are promising data supporting the usefulness of plasma glial fibrillary acidic protein (GFAP) in AD, but GFAP-to-autopsy studies are limited. Here, we tested the association between plasma GFAP and AD-related neuropathological outcomes in participants from the Boston University (BU) Alzheimer’s Disease Research Center (ADRC).
Participants and Methods:
This sample included 45 participants from the BU ADRC who had a plasma sample within 5 years of death and donated their brain for neuropathological examination. The most recent plasma samples were analyzed using the Simoa platform. Neuropathological examinations followed the National Alzheimer’s Coordinating Center procedures and diagnostic criteria. The NIA-Reagan Institute criteria were used for the neuropathological diagnosis of AD. Measures of GFAP were log-transformed. Binary logistic regression analyses tested the association between GFAP and autopsy-confirmed AD status, as well as with semi-quantitative ratings of regional atrophy (none/mild versus moderate/severe). Ordinal logistic regression analyses tested the association between plasma GFAP and Braak stage and CERAD neuritic plaque score. Area under the curve (AUC) statistics from receiver operating characteristics (ROC) using predicted probabilities from binary logistic regression examined the ability of plasma GFAP to discriminate autopsy-confirmed AD status. All analyses controlled for sex, age at death, years between last blood draw and death, and APOE e4 status.
Results:
Of the 45 brain donors, 29 (64.4%) had autopsy-confirmed AD. The mean (SD) age of the sample at the time of blood draw was 80.76 (8.58), and there were 2.80 (1.16) years between the last blood draw and death. The sample included 20 (44.4%) females; 41 (91.1%) were White, and 20 (44.4%) were APOE e4 carriers. Higher GFAP concentrations were associated with increased odds for having autopsy-confirmed AD (OR=14.12, 95% CI [2.00, 99.88], p=0.008). ROC analysis showed plasma GFAP accurately discriminated those with and without autopsy-confirmed AD on its own (AUC=0.75) and strengthened as the above covariates were added to the model (AUC=0.81). Increases in GFAP levels corresponded to increases in Braak stage (OR=2.39, 95% CI [0.71, 4.07], p=0.005), but not CERAD ratings (OR=1.24, 95% CI [0.004, 2.49], p=0.051). Higher GFAP levels were associated with greater temporal lobe atrophy (OR=10.27, 95% CI [1.53, 69.15], p=0.017), but this was not observed with any other regions.
Conclusions:
The current results show that antemortem plasma GFAP is associated with non-specific AD neuropathological changes at autopsy. Plasma GFAP could be a useful and practical biomarker for assisting in the detection of AD-related changes, as well as for study of disease mechanisms.
Patients with Fontan failure are high-risk candidates for heart transplantation and other advanced therapies. Understanding the outcomes following initial heart failure consultation can help define appropriate timing of referral for advanced heart failure care.
Methods:
This is a survey study of heart failure providers seeing any Fontan patient for initial heart failure care. Part 1 of the survey captured data on clinical characteristics at the time of heart failure consultation, and Part 2, completed 30 days later, captured outcomes (death, transplant evaluation outcome, and other interventions). Patients were classified as “too late” (death or declined for transplant due to being too sick) and/or “care escalation” (ventricular assist device implanted, inotrope initiated, and/or listed for transplant) within 30 days. “Late referral” was defined as those who were referred too late and/or had care escalation.
Results:
Between 7/2020 and 7/2022, 77 Fontan patients (52% inpatient) had an initial heart failure consultation. Ten per cent were referred too late (6 were too sick for heart transplantation with one subsequent death, and two others died without heart transplantation evaluation, within 30 days), and 36% had care escalation (21 listed ± 5 ventricular assist device implanted ± 6 inotrope initiated). Overall, 42% were late referrals. Heart failure consultation < 1 year after Fontan surgery was strongly associated with late referral (OR 6.2, 95% CI 1.8–21.5, p=0.004).
Conclusions:
Over 40% of Fontan patients seen for an initial heart failure consultation were late referrals, with 10% dying or being declined for transplant within a month of consultation. Earlier referral, particularly for those with heart failure soon after Fontan surgery, should be encouraged.
To evaluate the impact of treatment provided by a Crisis Resolution Home Treatment Team (CRHTT) in terms of preventing hospital admission, impact on service users’ symptoms and overall functioning, and service users’ satisfaction with the service. Secondary objectives were to evaluate the patient characteristics of those attending the CRHTT.
Methods:
All the service users treated by the CRHTT between 2016 and 2020 were included. Service users completed the Brief Psychiatric Rating Scale (BPRS), the Health of the Nation Outcome Scale (HoNOS), and the Client Satisfaction Questionnaire-version 8 (CSQ-8) before and after treatment by the CRHTT. Admission rates were compared between areas served by the CRHTT and control areas, before and after the introduction of the CRHTT, using two-way ANOVA.
Results:
Between 2016 and 2020, 1041 service users were treated by the service. Inpatient admissions in the areas served by the CRHTT fell by 38.5% after its introduction. There was a statistically significant interaction between CRHTT availability and time on admission rate, F(1,28) = 8.4, p = .007. BPRS scores were reduced significantly (p < .001), from a mean score of 32.01 before treatment to 24.64 after treatment. Mean HoNOS scores were 13.6 before and 9.1 after treatment (p < .001). Of the 1041 service users receiving the CSQ-8, only 180 returned it (17.3%). Service users’ median responses were “very positive” to all eight items on the CSQ-8.
Conclusions:
Although our study design has limitations, this paper provides some support that the CRHTT might be effective for the prevention of inpatient admission. The study also suggests that the CRHTT might be an effective option for the treatment of acute mental illness and crisis, although further research is needed in this area.
Crisis Resolution Home Treatment Teams (CRHTTs) offer short-term specialist psychiatric input to service users experiencing acute mental illness or crisis in the community. The South Lee CRHTT was set up in 2015.
Objectives:
Primary objectives
To evaluate the impact of treatment given by a CRHTT in terms of:
1. Preventing hospital admission,
2. Impact on service users’ symptoms and overall functioning,
3. Service users’ satisfaction with the service.
Secondary objectives
To evaluate patient characteristics of those attending the CRHTT, and to assess qualitative data provided by service users using thematic analysis.
Methods:
All the service users treated by the South Lee CRHTT between 2016 and 2020 were included in this review. Standardised quantitative measures are routinely taken by the South Lee CRHTT before and after treatment. The Brief Psychiatric Rating Scale (BPRS) was used to measure symptom reduction, and the Health of the Nation Outcome Scale (HoNOS) was used to measure quality of life/health outcomes. The Client Satisfaction Questionnaire-version 8 (CSQ-8) was used to evaluate service user satisfaction quantitatively, and service users were also asked for qualitative feedback.
Results:
Between 2016 and 2020, 1041 service users were treated by the service. Treatment by the CRHTT was shown to be effective across all primary outcome measures. Inpatient admissions in the areas served by the CRHTT fell by 38.5% after its introduction. BPRS scores were reduced significantly (p<.001), from a mean score of 32.01 before treatment to 24.64 after treatment. Mean HoNOS scores were 13.6 before and 9.1 after treatment (p<.001). Of the 1041 service users receiving the CSQ-8, 180 returned it completed (17.3%). Service users’ median responses were “very positive” on a 4-point Likert scale to all 8 items on the CSQ-8, and qualitative data were thematically analysed.
Conclusions:
The CRHTT was shown to be effective at preventing inpatient admission and, on quantitative measures, to be an effective option for the treatment of acute mental illness and crisis. Feedback gained from service users suggests that overall patient satisfaction with the CRHTT service was high.
Brushtail possums, Trichosurus vulpecula, are New Zealand's most serious vertebrate pest, facilitating the spread of bovine tuberculosis to livestock, and causing severe damage to native flora and fauna. Possum control has become a national research priority, involving the use of large numbers of captive possums. Successful adaptation of these animals to captivity is important for the welfare of the possums and for the validity of experimental results. The objective of this study was to determine, by behavioural means, the time individually caged possums required for adaptation to captivity. We used a simple behavioural measure - a possum's daily response to a caregiver at feeding (a feeding test) - to assess changes in the behaviour of possums after arrival in captivity. We also recorded changes in possum body weight throughout this period. Initially most possums ‘avoided’ the caregiver, but within 7 days more than 80 per cent of animals no longer avoided. ‘In den’ and ‘approach’ behaviour rapidly increased for the first 14 days in captivity, after which den use became less common as more possums ‘approached’ the caregiver. By day 29 of captivity, more than 80 per cent of the possums ‘approached’ the caregiver. The possums’ body weight did not change significantly during the first 14 days in captivity, but had increased significantly by day 28, and continued to increase for at least 6 weeks after capture. These data suggest that most possums adapt to captivity within 4 weeks. For the welfare of possums and the reliability of experimental results, we recommend that possums are not used in experiments until at least 4 weeks after capture.
Considerable literature has examined the COVID-19 pandemic’s negative mental health sequelae. It is recognised that most people experiencing mental health problems present to primary care and the development of interventions to support GPs in the care of patients with mental health problems is a priority. This review examines interventions to enhance GP care of mental health disorders, with a view to reviewing how mental health needs might be addressed in the post-COVID-19 era.
Methods:
Five electronic databases (PubMed, PsycINFO, Cochrane Library, Google Scholar and WHO ‘Global Research on COVID-19’) were searched from May to July 2021 for papers published in English, following Arksey and O’Malley’s six-stage scoping review process.
Results:
The initial search identified 148 articles, and a total of 29 were included in the review. These studies adopted a range of methodologies, most commonly randomised controlled trials, qualitative interviews and surveys. Results from included studies were divided into themes: interventions to improve identification of mental health disorders, interventions to support GPs, therapeutic interventions, telemedicine interventions, and barriers and facilitators to intervention implementation. Outcome measures reported included the seven-item Generalised Anxiety Disorder Scale (GAD-7), the nine-item Patient Health Questionnaire (PHQ-9) and the Patient Global Impression of Change Scale.
Conclusion:
Despite increasing recognition of the mental health sequelae of COVID-19, there is a lack of large-scale trials researching the acceptability or effectiveness of general practice interventions. Furthermore, there is a lack of research regarding possible biological interventions (psychiatric medications) for mental health problems arising from the pandemic.
The late Holocene Bonneville landslide, a 15.5 km2 rockslide-debris avalanche, descended 1000 m from the north side of the Columbia River Gorge and dammed the Columbia River where it bisects the Cascade Range of Oregon and Washington, USA. The landslide, inundation, and overtopping created persistent geomorphic, ecologic, and cultural consequences to the river corridor, reported by Indigenous narratives and explorer accounts, as well as scientists and engineers. From new dendrochronology and radiocarbon dating of three trees killed by the landslide, one entrained and buried by the landslide and two killed by rising water in the impounded Columbia River upstream of the blockage, we find (1) the two drowned trees and the buried tree died the same year, and (2) the age of tree death, and hence the landslide (determined by combined results of nine radiocarbon analyses of samples from the three trees), falls within AD 1421–1455 (3σ confidence interval). This result provides temporal context for the tremendous physical, ecological, and cultural effects of the landslide, as well as possible triggering mechanisms. The age precludes the last Cascadia Subduction Zone earthquake of AD 1700 as a landslide trigger, but not earlier subduction zone or local crustal earthquakes.