Diffusion decision models are widely used to characterize the cognitive and neural processes involved in making rapid decisions about objects and events in the environment. These decisions, which are made hundreds of times a day without prolonged deliberation, include recognition of people and things as well as real-time decisions made while walking or driving. Diffusion models assume that the processes involved in making such decisions are noisy and variable and that noisy evidence is accumulated until there is enough for a decision. This volume provides the first comprehensive treatment of the theory, mathematical foundations, numerical methods, and empirical applications of diffusion process models in psychology and neuroscience. In addition to the standard Wiener diffusion model, readers will find a detailed, unified treatment of the cognitive theory and the neural foundations of a variety of dynamic diffusion process models of two-choice, multiple choice, and continuous outcome decisions.
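The evidence-accumulation process described above can be sketched in a few lines: evidence drifts toward one of two boundaries with added Gaussian noise, and the first boundary crossed determines both the choice and the decision time. A minimal simulation sketch of a two-boundary Wiener process (parameter values are illustrative, not taken from the book):

```python
import random

def simulate_ddm(drift=0.1, threshold=1.0, noise=0.1, dt=0.001, seed=0):
    """Simulate one trial of a two-boundary Wiener diffusion process.

    Evidence x starts at 0 and accumulates with mean rate `drift`
    plus Gaussian noise scaled by sqrt(dt), until it crosses
    +threshold (choice "A") or -threshold (choice "B").
    Returns (choice, decision_time).
    """
    rng = random.Random(seed)
    x, t = 0.0, 0.0
    while abs(x) < threshold:
        x += drift * dt + noise * rng.gauss(0.0, 1.0) * dt ** 0.5
        t += dt
    return ("A" if x > 0 else "B"), t
```

With a positive drift rate, choice "A" dominates but noise still produces occasional errors and variable response times, which is the core behavior these models use to fit accuracy and reaction-time distributions jointly.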
There is a significant mortality gap between the general population and people with psychosis. Completion rates of regular physical health assessments for cardiovascular risk in this group are suboptimal. Point-of-care testing (POCT) for diabetes and hyperlipidaemia – providing an immediate result from a finger-prick – could improve these rates.
Aims
To evaluate the impact on patient–clinician encounters and on physical health check completion rates of implementing POCT for cardiovascular risk markers in early intervention in psychosis (EIP) services in South East England.
Method
A mixed-methods, real-world evaluation study was performed, with 40 POCT machines introduced across EIP teams in all eight mental health trusts in South East England from March to May 2021. Clinician training and support were provided. Numbers of completed physical health checks, HbA1c and lipid panel blood tests completed 6 and 12 months before and 6 months after introduction of POCT were collected for individual patients. Data were compared with those from the South West region, which acted as a control. Clinician questionnaires were administered at 2 and 8 months, capturing device usability and impacts on patient interactions.
Results
Post-POCT, South East England saw significant increases in HbA1c testing (odds ratio 2.02, 95% CI 1.17–3.49), lipid testing (odds ratio 2.38, 95% CI 1.43–3.97) and total completed health checks (odds ratio 3.61, 95% CI 1.94–7.94). These increases were not seen in the South West. Questionnaires revealed improved patient engagement, clinician empowerment and patients’ preference for POCT over traditional blood tests.
Conclusions
POCT is associated with improvements in the completion and quality of physical health checks, and thus could be a tool to enhance holistic care for individuals with psychosis.
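The odds ratios reported in the Results come from standard 2×2 contingency comparisons of completed versus not-completed tests. A minimal sketch of that computation with a Wald confidence interval (the counts below are invented for illustration, not the study's data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table.

    a: exposed, outcome present     b: exposed, outcome absent
    c: unexposed, outcome present   d: unexposed, outcome absent
    """
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lower = math.exp(math.log(or_) - z * se_log)
    upper = math.exp(math.log(or_) + z * se_log)
    return or_, lower, upper

# Hypothetical counts: tests completed vs. not, after vs. before POCT
or_, lo, hi = odds_ratio_ci(30, 70, 15, 85)
```

The study's actual estimates were presumably adjusted and region-compared rather than raw 2×2 ratios, but the interpretation of "odds ratio 2.02" is the same: post-POCT odds of a completed test roughly doubled.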
Objectives/Goals: Research suggests that veterans identifying as Black, Hispanic/Latinx and multiracial may be at higher risk for developing posttraumatic stress disorder (PTSD). The aim of the current study was to compare PTSD treatment outcomes across racial/ethnic veteran groups. Methods/Study Population: Data from 862 veterans who participated in a 2-week cognitive processing therapy (CPT)-based intensive PTSD treatment program were evaluated. Veterans were on average 45.2 years old and 53.8% identified as male. Overall, 64.4% identified as White, Non-Hispanic/Latino; 17.9% identified as Black, Indigenous, and People of Color (BIPOC), Non-Hispanic/Latino; and 17.7% identified as Hispanic/Latino. PTSD (PCL-5) and depression (PHQ-9) measures were collected at intake, completion, and 3-month follow-up. A Bayes factor approach was used to examine whether PTSD and depression outcomes would be noninferior for BIPOC and Hispanic/Latino groups compared to White, Non-Hispanic veterans over time. Results/Anticipated Results: PTSD severity decreased for the White, BIPOC, and Hispanic/Latino groups from baseline to 3-month follow-up. The likelihood that BIPOC and Hispanic/Latino groups would have comparable PTSD outcomes was 1.81e+06 to 208.56 times greater than the likelihood that these groups would have worse outcomes than the White, Non-Hispanic veterans. Depression severity values on the PHQ-9 decreased for the White, BIPOC, and Hispanic/Latino groups from baseline to 3-month follow-up. The likelihood that BIPOC and Hispanic/Latino groups would have comparable depression outcomes at treatment completion approached infinity. At 3-month follow-up, the corresponding likelihoods were 1.42e+11 and 3.09e+05, respectively. Discussion/Significance of Impact: Results indicated that White, BIPOC, and Hispanic/Latino groups experienced similarly large PTSD and depression symptom reductions.
This study adds to the growing body of literature examining differences in clinical outcomes across racial/ethnic groups for PTSD.
Corn (Zea mays L.) is an important crop that contributes to global food security, but understanding of how farm management practices and soil health affect corn grain nutrient composition, and therefore human health, is lacking. Leveraging Rodale Institute's Farming Systems Trial—a long-term field experiment established in 1981 in Kutztown, PA, USA—this study assessed the impact of different agricultural management systems on corn grain nutrient profiles in a trial in which long-term management has produced differences in soil health indicators between treatments. The main plot factor was two tillage practices (intensive and reduced) and the subplot factor was four cropping systems (non-diversified conventional [nCNV], diversified conventional [dCNV], legume-based organic [ORG-LEG], and manure-based organic [ORG-MNR]). Generally, the levels of amino acids, vitamins, and protein in corn grain were greatest in the ORG-MNR system, followed by the ORG-LEG and dCNV systems, and finally the nCNV system. It is important to consider that the observed differences between the organically and conventionally grown grain could be due to variations in the corn hybrids used in those systems. However, nutrient composition of corn also differed within the conventional and within the organic systems as a function of management practices (diversified crop rotation and cover cropping), which contributed to differences in soil health indicators (soil compaction, soil protein, and organic C levels) that may also influence grain nutrient concentrations. With the exception of methionine, nutrient concentration in corn grain was not affected by different tillage regimes. These findings provide novel information on the corn grain nutritional quality of organic and conventional cropping systems after long-term management and give insights into how system-specific components affect the nutrient composition of corn grain.
Accurate diagnosis of bipolar disorder (BPD) is difficult in clinical practice, with an average delay between symptom onset and diagnosis of about 7 years. A depressive episode often precedes the first manic episode, making it difficult to distinguish BPD from unipolar major depressive disorder (MDD).
Aims
We use genome-wide association analyses (GWAS) to identify differential genetic factors and to develop predictors based on polygenic risk scores (PRS) that may aid early differential diagnosis.
Method
Based on individual genotypes from case–control cohorts of BPD and MDD shared through the Psychiatric Genomics Consortium, we compile case–case–control cohorts, applying a careful quality control procedure. In a resulting cohort of 51 149 individuals (15 532 BPD patients, 12 920 MDD patients and 22 697 controls), we perform a variety of GWAS and PRS analyses.
Results
Although our GWAS is not well powered to identify genome-wide significant loci, we find significant chip heritability and demonstrate the ability of the resulting PRS to distinguish BPD from MDD, including BPD cases with depressive onset (BPD-D). We replicate our PRS findings in an independent Danish cohort (iPSYCH 2015, N = 25 966). We observe strong genetic correlation between our case–case GWAS and that of case–control BPD.
Conclusions
We find that MDD and BPD, including BPD-D, are genetically distinct. Our findings support the view that controls, MDD patients and BPD patients lie primarily on a continuum of genetic risk. Future studies with larger and richer samples will likely yield a better understanding of these findings and enable the development of better genetic predictors distinguishing BPD and, importantly, BPD-D from MDD.
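At its core, a polygenic risk score of the kind used in this study is a weighted sum of an individual's risk-allele dosages, with weights taken from GWAS effect-size estimates. A minimal sketch of that computation (the variant IDs, betas, and genotypes below are hypothetical, not taken from this study):

```python
def polygenic_score(genotypes, weights):
    """Weighted sum of risk-allele dosages (0, 1 or 2 per variant).

    genotypes: dict of variant id -> allele dosage for one individual
    weights:   dict of variant id -> GWAS effect-size estimate (beta)
    Variants without a weight are skipped.
    """
    return sum(dose * weights[v] for v, dose in genotypes.items() if v in weights)

# Hypothetical effect sizes from a case-case (BPD vs. MDD) GWAS;
# all values below are invented for illustration.
weights = {"rs0001": 0.12, "rs0002": -0.05, "rs0003": 0.30}
person = {"rs0001": 2, "rs0002": 1, "rs0003": 0, "rs9999": 2}  # rs9999: no weight
score = polygenic_score(person, weights)  # 2*0.12 + 1*(-0.05) + 0*0.30
```

In practice, PRS pipelines additionally handle linkage-disequilibrium pruning, p-value thresholds and strand alignment; the weighted sum above is only the final scoring step.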
Despite the global expansion of electronic medical record (EMR) systems and their increased integration with artificial intelligence (AI), their utilization in disaster settings remains limited, and few studies have evaluated their implementation. We aimed to evaluate Fast Electronic Medical Record (fEMR), a novel, mobile EMR designed for resource-limited settings, based on user feedback.
Methods
We examined usage data through October 2022 to categorize the nature of its use for disaster response and determine the number of patients served. We conducted interviews with stakeholders and gathered input from clinicians who had experience using fEMR.
Results
Over eight years, fEMR was employed 60 times in 11 countries across four continents by 14 organizations (universities, non-profits, and disaster response teams). This involved 37,500+ patient encounters in diverse settings including migrant camps at the US-Mexico and Poland-Ukraine borders, mobile health clinics in Kenya and Guatemala, and post-earthquake relief in Haiti. User feedback highlighted adaptability, but suggested hardware and workflow improvements.
Conclusion
EMR systems have the potential to enhance healthcare delivery in humanitarian responses, offer valuable data for planning and preparedness, and support measurement of effectiveness. As a simple, versatile EMR system, fEMR has been deployed to numerous disaster response and low-income settings.
To investigate the symptoms of SARS-CoV-2 infection, their dynamics and their discriminatory power for the disease, we used longitudinally, prospectively collected information reported at the time of occurrence. We analysed data from a large UK phase 3 COVID-19 vaccine clinical trial, conducted while the alpha variant was the predominant strain. Participants were assessed for SARS-CoV-2 infection via nasal/throat PCR at recruitment, at vaccination appointments, and when symptomatic. Statistical techniques were implemented to infer estimates representative of the UK population, accounting for multiple symptomatic episodes associated with one individual. An optimal diagnostic model for SARS-CoV-2 infection was derived. The 4-month prevalence of SARS-CoV-2 was 2.1%, increasing to 19.4% (16.0%–22.7%) in participants reporting loss of appetite and 31.9% (27.1%–36.8%) in those with anosmia/ageusia. The model identified anosmia and/or ageusia, fever, congestion, and cough as significantly associated with SARS-CoV-2 infection. Symptom dynamics differed markedly between the two groups: in PCR-positive participants symptoms started slowly, peaked later and lasted longer, whereas in PCR-negative participants they declined consistently, with fewer than 3 days of symptoms reported on average. Anosmia/ageusia peaked late in confirmed SARS-CoV-2 infection (day 12), indicating low discriminatory power for early disease diagnosis.
Cover crop residue retention on the soil surface can suppress weeds and improve organic no-till soybean (Glycine max) yield and profitability compared to a tilled system. Appropriate cereal rye (Secale cereale) fall planting date and termination methods in the spring are critical to achieve these benefits. A plot-scale agronomic experiment was carried out from September 2018 to October 2021 in Kutztown, PA, USA to demonstrate the influence of cereal rye planting date (September or October) and mechanical termination method [no-till (I & J roller-crimper, Dawn ZRX roller, and mow-ted) and tilled (plow-cultivate)] on cover crop regrowth density, weed biomass, soybean yield, and economic returns. In one out of three years, the September rye planting accumulated more cover crop biomass than the October planting, but the regrowth of the rye after roller-crimping was greater with this planting date. Cover crop planting date had no effect on total weed biomass and demonstrated varying effects on soybean grain yield and economic returns. The Dawn ZRX roller outperformed the I & J roller-crimper in effectively terminating cover crops, while the I & J roller-crimper demonstrated more uniform weed suppression and led to greater soybean yields over a span of three years. Organic no-till strategies eliminated the need for tillage, reduced variable costs by 14% over plow-cultivated plots, and generated ~19% greater net revenue across the study period (no-till vs tillage = US $845 vs US $711 ha⁻¹). Terminating cereal rye with roller-crimping technology can be a positive investment in an organic soybean production system.
Knowledge graphs have become a common approach for knowledge representation. Yet, the application of graph methodology is elusive due to the sheer number and complexity of knowledge sources. In addition, semantic incompatibilities hinder efforts to harmonize and integrate across these diverse sources. As part of The Biomedical Translator Consortium, we have developed a knowledge graph–based question-answering system designed to augment human reasoning and accelerate translational scientific discovery: the Translator system. We have applied the Translator system to answer biomedical questions in the context of a broad array of diseases and syndromes, including Fanconi anemia, primary ciliary dyskinesia, multiple sclerosis, and others. A variety of collaborative approaches have been used to research and develop the Translator system. One recent approach involved the establishment of a monthly “Question-of-the-Month (QotM) Challenge” series. Herein, we describe the structure of the QotM Challenge; the six challenges that have been conducted to date on drug-induced liver injury, cannabidiol toxicity, coronavirus infection, diabetes, psoriatic arthritis, and ATP1A3-related phenotypes; the scientific insights that have been gleaned during the challenges; and the technical issues that were identified over the course of the challenges and that can now be addressed to foster further development of the prototype Translator system. We close with a discussion on Large Language Models such as ChatGPT and highlight differences between those models and the Translator system.
The absence of clinical information in the aftermath of disasters in resource-constrained environments costs lives. fEMR (fast Electronic Medical Records) is a medical records system designed for mobile clinics that has proven useful in post-disaster settings. While the original version of the system was developed for areas without access to the Internet, a new version was developed in 2019 to accommodate regions with connectivity.
Method:
We reviewed the design, implementation, and usage of fEMR from June 2014 to October 2022. We used logged data on the number of users, patient encounters, and the circumstances of each deployment. We compared usage between the original fEMR system and fEMR On-Chain.
Results:
The original fEMR system was created in an iterative process by students in Computer Science classes at three different American universities. The system creates a closed intranet signal to which clinicians connect their own devices to access the software. The hardware is transported to the medical team in a carry-on suitcase prior to deployment. All data are stored on a laptop that acts as a server. The online version, fEMR On-Chain, was developed under a grant but is sustained in development through academic partnerships. Both versions are designed so that the provider can complete an encounter with as few clicks as possible and with as little input as necessary to identify patients. The original fEMR system has been deployed to mobile clinics worldwide since 2014; it holds about 14,181 patients and 16,021 clinical encounters from 12 different countries. fEMR On-Chain has been deployed to refugee and migrant settings since 2019, containing about 18,000 patients and 22,000 encounters in two different countries.
Conclusion:
Successive versions of the fEMR system have been used in a variety of conditions and settings, with usage accelerating since 2019 in refugee and migrant health centers.
Inferences consistent with “recognition-based” decision-making may be drawn for various reasons other than recognition alone. We demonstrate that, for 2-alternative forced-choice decision tasks, less-is-more effects (reduced performance with additional learning) are not restricted to recognition-based inference but can also be seen in circumstances where inference is knowledge-based but item knowledge is limited. One reason why such effects may not be observed more widely is the dependence of the effect on specific values for the validity of recognition and knowledge cues. We show that both recognition and knowledge validity may vary as a function of the number of items recognized. The implications of these findings for the special nature of recognition information, and for the investigation of recognition-based inference, are discussed.
Considerable heterogeneity exists in treatment response to first-line posttraumatic stress disorder (PTSD) treatments, such as Cognitive Processing Therapy (CPT). Relatively little is known about the timing of when during a course of care the treatment response becomes apparent. Novel machine learning methods, especially continuously updating prediction models, have the potential to address these gaps in our understanding of response and optimize PTSD treatment.
Methods
Using data from a 3-week (n = 362) CPT-based intensive PTSD treatment program (ITP), we explored three methods for generating continuously updating prediction models to predict endpoint PTSD severity. These included Mixed Effects Bayesian Additive Regression Trees (MixedBART), Mixed Effects Random Forest (MERF) machine learning models, and Linear Mixed Effects models (LMM). Models used baseline and self-reported PTSD symptom severity data collected every other day during treatment. We then validated our findings by examining model performances in a separate, equally established, 2-week CPT-based ITP (n = 108).
Results
Results across approaches were very similar and indicated modest prediction accuracy at baseline (R2 ~ 0.18), with increasing accuracy of predictions of final PTSD severity across program timepoints (e.g. mid-program R2 ~ 0.62). Similar findings were obtained when the models were applied to the 2-week ITP. Neither the MERF nor the MixedBART machine learning approach outperformed LMM prediction, though benefits of each may differ based on the application.
Conclusions
Utilizing continuously updating models in PTSD treatments may be beneficial for clinicians in determining whether an individual is responding, and when this determination can be made.
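The "continuously updating" idea can be illustrated with far simpler machinery than MixedBART or MERF: refit a predictor of endpoint severity each time a new in-treatment score arrives, and observe accuracy grow across timepoints, mirroring the rising R² values reported above. The sketch below uses ordinary least squares on hypothetical PCL-5 trajectories (all scores are invented for illustration; the actual study used mixed-effects models on real data):

```python
def fit_linear(xs, ys):
    """Ordinary least squares for y = a + b*x, in closed form."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def r_squared(xs, ys, a, b):
    """Proportion of variance in ys explained by the fit (a, b)."""
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - sum(ys) / len(ys)) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

# Hypothetical PCL-5 trajectories: one row per patient, scores at
# days 0, 7, 14 and the endpoint (day 21). Invented for illustration.
scores = [
    [60, 50, 40, 32],
    [60, 58, 55, 53],
    [50, 42, 34, 28],
    [50, 48, 46, 45],
    [70, 55, 42, 34],
    [55, 53, 50, 48],
]
endpoint = [row[3] for row in scores]

# Refit the predictor at each in-treatment timepoint using the most
# recent available score; prediction accuracy grows over the program.
r2_by_day = []
for t in range(3):
    xs = [row[t] for row in scores]
    a, b = fit_linear(xs, endpoint)
    r2_by_day.append(r_squared(xs, endpoint, a, b))
```

The qualitative pattern matches the abstract: baseline scores predict the endpoint only modestly, while mid-program scores predict it well, which is what makes continuously updating models clinically useful for spotting non-responders early.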
Recently, the Health of the Nation Outcome Scales 65+ (HoNOS65+) were revised. Twenty-five experts from Australia and New Zealand completed an anonymous web-based survey about the content validity of the revised measure, the HoNOS Older Adults (HoNOS OA).
Results
All 12 HoNOS OA scales were rated by most (≥75%) experts as ‘important’ or ‘very important’ for determining overall clinical severity among older adults. Ratings of sensitivity to change, comprehensibility and comprehensiveness were more variable, but mostly positive. Experts’ comments provided possible explanations. For example, some experts suggested modifying or expanding the glossary examples for some scales (e.g. those measuring problems with relationships and problems with activities of daily living) to be more older adult-specific.
Clinical implications
Experts agreed that the HoNOS OA measures important constructs. Training may need to orient experienced raters to the rationale for some revisions. Further psychometric testing of the HoNOS OA is recommended.
Prisons are susceptible to outbreaks. Control measures focusing on isolation and cohorting negatively affect wellbeing. We present an outbreak of coronavirus disease 2019 (COVID-19) in a large male prison in Wales, UK, October 2020 to April 2021, and discuss control measures.
We gathered case information, including demographics, staff residence postcode, resident cell number, work areas/dates, test results, staff interview dates/notes and resident prison-transfer dates. Epidemiological curves were mapped by prison location. Control measures included isolation (exclusion from work or cell isolation), cohorting (new admissions and work-area groups), asymptomatic testing (case-finding), removal of communal dining and movement restrictions. Facemask use and enhanced hygiene were already in place. Whole-genome sequencing (WGS) and interviews determined the genetic relationship between cases and the plausibility of transmission.
Of 453 cases, 53% (n = 242) were staff, most aged 25–34 years (11.5% females, 27.15% males) and symptomatic (64%). Crude attack-rate was higher in staff (29%, 95% CI 26–64%) than in residents (12%, 95% CI 9–15%).
Whole-genome sequencing can help differentiate multiple introductions from person-to-person transmission in prisons. It should be introduced alongside asymptomatic testing as soon as possible to control prison outbreaks. Timely epidemiological investigation, including data visualisation, allowed dynamic risk assessment and proportionate control measures, minimising the reduction in resident welfare.
Transient ischaemic attack (TIA) can lead to lasting changes in brain structure and function resulting in cognitive impairment. Cognitive screening tools may lack sensitivity for detecting cognitive impairments, particularly executive function, which tends to be the earliest affected domain in vascular cognitive impairment.
Aim:
In this preliminary study, we examine a working memory (WMem) task as a sensitive measure of cognitive impairment in TIA.
Method:
Patients referred to a TIA clinic for transient neurological symptoms completed a general cognitive screening tool (Montreal Cognitive Assessment; MoCA), and a WMem task (2-N-back) in a cross-sectional design.
Results:
TIA patients (n = 12) showed significantly reduced WMem performance on the N-back compared to patients diagnosed with mimic clinical conditions with overlapping symptoms (n = 16). No group differences were observed on the MoCA.
Conclusions:
Assessing WMem may provide a sensitive measure of cognitive impairment after TIA, with implications for cognitive screening in TIA services to triage patients for further neuropsychological support, or for interventions to prevent vascular dementia.
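The 2-N-back task used in the Method has a simple scoring logic: a trial is a target when the current stimulus matches the one presented two items earlier, and performance is typically summarized by the hit rate on those targets. A hypothetical scoring sketch (function names and the hit-rate-only summary are illustrative; real scoring usually also considers false alarms and reaction times):

```python
def nback_targets(stimuli, n=2):
    """Return indices of n-back targets: positions where the current
    stimulus matches the one presented n items earlier. In a 2-N-back
    task these are the trials a participant should respond to."""
    return [i for i in range(n, len(stimuli)) if stimuli[i] == stimuli[i - n]]

def nback_accuracy(stimuli, responses, n=2):
    """Hit rate: fraction of targets the participant responded to.
    `responses` is the set of trial indices where a response was made."""
    targets = nback_targets(stimuli, n)
    if not targets:
        return None
    return sum(1 for i in targets if i in responses) / len(targets)
```

For example, in the sequence A, B, A, B, C, A, C the targets fall at indices 2, 3 and 6; a participant responding at only two of those would score a hit rate of 2/3.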
There has been a notable increase in requests for psychiatric reports from District Courts for persons remanded to Ireland’s main remand prison, Cloverhill. We aimed to identify if reports were prepared for persons with severe mental illness and if they led to therapeutic benefits such as diversion to healthcare. Measures of equitability between Cloverhill and other District Courts were explored.
Methods:
For District Court-requested reports completed by the Prison Inreach and Court Liaison Service (PICLS) at Cloverhill Prison from 2015 to 2017, we recorded clinical variables and therapeutic outcomes such as diversion to inpatient psychiatric settings.
Results:
Of 236 cases, over half were diverted to inpatient or outpatient psychiatric care. In one-third of remand episodes the person was admitted to a psychiatric hospital, mainly in non-forensic settings. Nearly two-thirds had major mental illness, mainly schizophrenia and related conditions. Almost half had active psychosis. Cases in Cloverhill District Court and other District Courts were similarly likely to have active psychosis (47% overall) and hospital admission (33% overall). Voluntary reports were more likely to identify active psychosis, with over 90% of such cases diverted to inpatient or outpatient community treatment settings.
Conclusions:
This is the first large-scale study of diversion outcomes following requests for psychiatric advice from District Courts in Ireland. Requests were mainly appropriate. Over half led to diversion from the criminal justice system to healthcare settings. A complementary network of diversion initiatives is needed at every stage of the criminal justice system to divert mentally ill individuals to appropriate settings at the earliest possible stage.
Public representations of long-term residential care (LTRC) facilities have received limited focus in Canada, although literature from other countries indicates that public perceptions of LTRC tend to be negative, particularly in contexts that prioritize aging and dying in place. Using Manitoba as the study context, we investigate a question of broad relevance to the Canadian perspective; specifically, what are current public perceptions of the role and function of long-term care in the context of a changing health care system? Through critical discourse analysis, we identify four overarching discourses dominating public perceptions of LTRC: the problem of public aging, LTRC as an imperfect solution to the problem, LTRC as ambiguous social spaces, and LTRC as a last resort option. Building on prior theoretical work, we suggest that public perceptions of LTRC are informed by neoliberal discourses that privilege individual responsibility and problematize public care.
Studying phenotypic and genetic characteristics of age at onset (AAO) and polarity at onset (PAO) in bipolar disorder can provide new insights into disease pathology and facilitate the development of screening tools.
Aims
To examine the genetic architecture of AAO and PAO and their association with bipolar disorder disease characteristics.
Method
Genome-wide association studies (GWASs) and polygenic score (PGS) analyses of AAO (n = 12 977) and PAO (n = 6773) were conducted in patients with bipolar disorder from 34 cohorts and a replication sample (n = 2237). The association of onset with disease characteristics was investigated in two of these cohorts.
Results
Earlier AAO was associated with a higher probability of psychotic symptoms, suicidality, lower educational attainment, not living together and fewer episodes. Depressive onset correlated with suicidality and manic onset correlated with delusions and manic episodes. Systematic differences in AAO between cohorts and continents of origin were observed. This was also reflected in single-nucleotide variant-based heritability estimates, with higher heritabilities for stricter onset definitions. Increased PGS for autism spectrum disorder (β = −0.34 years, s.e. = 0.08), major depression (β = −0.34 years, s.e. = 0.08), schizophrenia (β = −0.39 years, s.e. = 0.08), and educational attainment (β = −0.31 years, s.e. = 0.08) were associated with an earlier AAO. The AAO GWAS identified one significant locus, but this finding did not replicate. Neither GWAS nor PGS analyses yielded significant associations with PAO.
Conclusions
AAO and PAO are associated with indicators of bipolar disorder severity. Individuals with an earlier onset show an increased polygenic liability for a broad spectrum of psychiatric traits. Systematic differences in AAO across cohorts, continents and phenotype definitions introduce significant heterogeneity, affecting analyses.
A significant number of people with autism require in-patient psychiatric care. Although the requirement to adequately meet the needs of people with autism in these settings is enshrined in UK law and supported by national guidelines, little information is available on current practice.
Aims
To describe characteristics of UK in-patient psychiatric settings admitting people with autism. Also to examine psychiatric units for their suitability, and the resultant impact on admission length and restrictive interventions.
Method
Multiple-choice questions about in-patient settings and their ability to meet the needs of people with autism and the impact on their outcomes were developed as a cross-sectional study co-designed with a national autism charity. The survey was distributed nationally, using an exponential and non-discriminatory snowballing technique, to in-patient unit clinicians to provide a current practice snapshot.
Results
Eighty responses from across the UK were analysed after excluding duplicates. Significant variation between units exists across all parameters enquired about. A lack of autism-related training and skills across staff groups was identified; this was disproportionate when comparing intellectual disability units with general mental health units, particularly regarding psychiatrists (94% with specialist skills in intellectual disability units versus 6% in general mental health units). In total, 28% of survey respondents felt people with autism are more likely to be subject to seclusion, and 40% believed in-patients with autism are likely to end up in segregation.
Conclusions
There is no systematic approach to supporting people with autism admitted to in-patient psychiatric units. Significant concerns are highlighted regarding the lack of professional training and skills, resulting in variable clinical practice and care delivery underpinned by policy deficiency. This could account for the reported in-patient outcomes of longer stays and segregation experienced by people with autism.