Low vegetable intake is a key contributor to the health burden experienced by young adults in rural communities(1). Digital interventions provide an accessible delivery model that can be personalised to meet the diverse preferences of young adults(2). This study aimed to determine the feasibility, acceptability and efficacy of a personalised digital intervention to increase vegetable intake (Veg4Me), co-designed to meet the needs of young adults living in rural Australian communities(3). A 12-week assessor-blinded, two-arm, parallel randomised controlled trial was undertaken from August 2023 until April 2024. Young adults (18–35 years; consuming < 5 serves of vegetables/day; with an internet-connected device) living in Loddon Campaspe or Colac Otway Shire in Victoria, Australia, were recruited via social media and local government networks. Participants were randomised to receive 12 weeks of personalised (intervention) or non-personalised (control) support via a free web application (app; Veg4Me). Key features included 1) recipes personalised to users’ dietary and cooking preferences, 2) geo-located food environment map, 3) healthy eating resources, 4) goal-setting portal and 5) personalised e-newsletters. The primary outcome was feasibility: recruitment, participation and retention rate. Secondary outcomes were usability and user experience, perceived change in vegetable intake, self-reported vegetable intake, and confidence to cook fresh green and root vegetables. Regression analyses (adjusted for baseline) were used to test for significant differences between groups. A total of n = 536 individuals registered on the Veg4Me website. After excluding fraudulent and duplicate responses (n = 289), n = 124 were eligible and provided consent to participate, n = 116 were randomised and n = 83 completed post-intervention data collection. The recruitment rate was 47%, participation rate was 93% and retention rate was 72%.
Compared to the control, more intervention participants were satisfied with Veg4Me (76% vs 52%). Most intervention participants reported that access to personalised recipes gave them confidence to eat a wider variety of vegetables (83%), while 76% accessed the food environment map, 63% accessed the healthy eating resources, 78% accessed the goal-setting function and 90% reported that the e-newsletters prompted them to access Veg4Me. Compared to the control, more intervention participants perceived that their vegetable intake had changed in the last 12 weeks (85% vs 57%; p = 0.013). Mean vegetable intake at 12 weeks in intervention and control participants was 2.7 (SD 1.0) and 2.7 (SD 1.4) serves/day, respectively (p = 0.67). Confidence to cook fresh green vegetables at 12 weeks in intervention and control participants was 93% and 91%, respectively (p = 0.24), while for root vegetables this was 88% and 81%, respectively (p = 0.11). Findings demonstrate the feasibility and acceptability of the Veg4Me intervention, and some evidence of efficacy. This study introduces a new strategy that has promise for addressing diet and health inequities experienced by young adults living in rural communities.
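The feasibility rates reported above follow directly from the participant flow; the sketch below assumes the denominators implied by the abstract (genuine registrations for recruitment, eligible consenters for participation, randomised participants for retention), which are inferences rather than stated definitions:

```python
# Participant flow reported in the abstract
registered = 536
fraudulent_or_duplicate = 289
eligible_consented = 124
randomised = 116
completed = 83

genuine = registered - fraudulent_or_duplicate        # 247 genuine registrations
recruitment_rate = randomised / genuine               # assumed denominator: genuine registrations
participation_rate = randomised / eligible_consented  # assumed denominator: eligible consenters
retention_rate = completed / randomised               # assumed denominator: randomised participants

# recruitment 47%, participation 93.5%, retention 72%
print(f"recruitment {recruitment_rate:.0%}, "
      f"participation {participation_rate:.1%}, "
      f"retention {retention_rate:.0%}")
```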
The aim of this study was to determine whether there was a significant change in cardiac [123I]-metaiodobenzylguanidine uptake between baseline and follow-up in individuals with mild cognitive impairment with Lewy bodies (MCI-LB) who had normal baseline scans. Eight participants with a diagnosis of probable MCI-LB and a normal baseline scan consented to a follow-up scan between 2 and 4 years after baseline. All eight repeat scans remained normal; however, in three cases uptake decreased by more than 10%. The mean change in uptake between baseline and repeat was −5.2% (range: −23.8% to +7.0%). The interpolated mean annual change in uptake was −1.6%.
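The interpolated annual change is the total change scaled linearly to a per-year rate; a minimal illustration (the 3.25-year interval is a hypothetical mid-range value, since per-participant intervals are not reported here):

```python
def annualised_change(total_pct_change, interval_years):
    """Linearly interpolate a total percentage change to a per-year rate."""
    return total_pct_change / interval_years

# Hypothetical: a -5.2% total change over a 3.25-year follow-up interval
print(round(annualised_change(-5.2, 3.25), 1))  # -> -1.6
```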
Attendance at university can result in social support network disruption. This can have a negative impact on the mental health of young people. Demand for mental health support continues to increase in universities, making identification of factors associated with poorer outcomes a priority. Although social functioning has a bi-directional relationship with mental health, its association with effectiveness of psychological treatments has yet to be explored.
Objectives
To explore whether students showing different trajectories of change in social function over the course of treatment differed in eventual treatment outcome.
Methods
Growth mixture models were estimated on a sample of 5221 students treated in routine mental health services. Different trajectories of change in self-rated impairment in social leisure activities and close relationships (Work and Social Adjustment Scale (WSAS) items 3 and 5) during the course of treatment were identified. Associations between trajectory classes and treatment outcomes were explored through multinomial regression.
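Growth mixture models are typically fitted in specialised software (e.g., Mplus or R's lcmm); as a rough, hypothetical analogue of the approach, per-student trajectory summaries (OLS intercept and slope) can be clustered with a Gaussian mixture on simulated data:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Simulated WSAS item scores (0-8) at 5 sessions for two latent classes:
# "mildly impaired, stable" and "severely impaired, improving".
n_per_class = 200
sessions = np.arange(5)
stable = 2 + rng.normal(0, 0.5, (n_per_class, 5))                 # flat around 2
improving = 7 - 1.2 * sessions + rng.normal(0, 0.5, (n_per_class, 5))
scores = np.vstack([stable, improving])

# A growth mixture model jointly estimates class-specific intercepts and
# slopes; as a crude stand-in, summarise each trajectory by its OLS
# slope and intercept, then fit a two-component Gaussian mixture.
X = np.array([np.polyfit(sessions, y, 1) for y in scores])        # [slope, intercept]
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
labels = gmm.predict(X)
print(np.bincount(labels))  # two recovered classes of roughly 200 each
```

Class membership (here, `labels`) would then be carried into a multinomial regression against treatment outcome, as in the study.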
Results
Five trajectory classes were identified for social leisure activity impairment (Figure 1), and three classes were identified for close relationship impairment (Figure 2). For both measures the majority of students remained mildly impaired (Class 1). Other trajectories included severe impairment with limited improvement (Class 2), severe impairment with delayed improvement (Class 3), and, in social leisure activities only, rapid improvement (Class 4), and deterioration (Class 5). There was an association between trajectories of improvement in social functioning over time and positive treatment outcomes. Trajectories of worsening or stable severe impairment were associated with negative treatment outcomes.
[Figure 1: trajectories of change in social leisure activity impairment]
[Figure 2: trajectories of change in close relationship impairment]
Conclusions
Changes in social functioning impairment are associated with psychological treatment outcomes in students, suggesting that these changes may be associated with treatment effectiveness or recovery experiences. Future research should look to establish whether a causal link exists to understand if additional benefit for students can be gained through integrating social support within psychological treatment.
The diagnosis of functional constipation (FC) relies on patient-reported outcomes evaluated as criteria based on the clustering of symptoms. Although the ROME IV criteria for FC diagnosis are relevant for a multicultural population(1), how an individual’s lifestyle, environment and culture may influence the pathophysiology of FC remains a gap in our knowledge. Building on insights into mechanisms underpinning disorders of gut-brain interactions (formerly functional gastrointestinal disorders) in the COMFORT Cohort(2), this study aimed to investigate the differences in gastrointestinal (GI) symptom scores among participants with FC in comparison to healthy controls between Chinese and non-Chinese New Zealanders. The Gastrointestinal Understanding of Functional Constipation In an Urban Chinese and Urban non-Chinese New Zealander Cohort (GUTFIT) study was a longitudinal cohort study, which aimed to determine a comprehensive profile of characteristics and biological markers of FC between Chinese and non-Chinese New Zealanders. Chinese (classified according to maternal and paternal ethnicity) or non-Chinese (mixed ethnicities) adults living in Auckland, classified as with or without FC based on ROME IV, were enrolled. GI symptoms, anthropometry, quality of life, diet, and biological samples were assessed monthly for 3 months, from March to June 2023. Demographics were obtained through self-reported questionnaires, and GI symptoms were assessed using the Gastrointestinal Symptom Rating Scale (GSRS) and the Structured Assessment of Gastrointestinal Symptoms Scale (SAGIS). This analysis is a cross-sectional assessment of patient-reported outcomes of GI symptoms. Of 78 enrolled participants, 66 completed the study (male, n = 10; female, n = 56) and were distributed across: Chinese with FC (Ch-FC; n = 11), Chinese control (Ch-CON; n = 19), non-Chinese with FC (NCh-FC; n = 16), non-Chinese control (NCh-CON; n = 20).
Mean ± SD age, body mass index, and waist circumference were 40 ± 9 years, 22.7 ± 2.5 kg/m2, and 78.0 ± 7.6 cm, respectively. Ethnicity did not impact SAGIS domain scores for GI symptoms (ethnicity × FC severity interaction p>0.05). Yet the constipation symptoms domain of the GSRS was scored differently depending on ethnicity and FC status (ethnicity × FC interaction p<0.05). In post hoc comparison, NCh-FC tended to have higher GSRS constipation severity scores than Ch-FC (NCh-FC 3.8 ± 0.8 versus Ch-FC 3.4 ± 1.0 out of 8, p<0.1). Although constipation symptom severity tended to be higher in NCh-FC, on the whole, ethnicity did not explain variation in this cohort. FC status was a more important predictor of GI symptom scores. Future research will assess differences in symptom burden to explore ethnicity-specific characteristics of FC.
Distinct pathophysiology has been identified with disorders of gut-brain interactions (DGBI), including functional constipation (FC)(1,2), yet the causes remain unclear. Identifying how modifiable factors (i.e., diet) differ depending on gastrointestinal health status is important to understand relationships between dietary intake, pathophysiology, and disease burden of FC. Given that dietary choices are culturally influenced, understanding ethnicity-specific diets of individuals with FC is key to informing appropriate symptom management and prevention strategies. Despite distinct genetic and cultural features of Chinese populations with increasing FC incidence(3), DGBI characteristics are primarily described in Caucasian populations(2). We therefore aimed to identify how dietary intake of Chinese individuals with FC differs from that of non-Chinese individuals with FC, relative to healthy controls. The Gastrointestinal Understanding of Functional Constipation In an Urban Chinese and Urban non-Chinese New Zealander Cohort (GUTFIT) study was a longitudinal case-control study using systems biology to investigate the multi-factorial aetiology of FC. Here we conducted a cross-sectional dietary intake assessment, comparing Chinese individuals with FC (Ch-FC) against three control groups: a) non-Chinese with FC (NCh-FC), b) Chinese without FC (Ch-CON), and c) non-Chinese without FC (NCh-CON). Recruitment from Auckland, New Zealand (NZ) identified Chinese individuals based on self-identification alongside both parents self-identifying as Chinese, and FC using the ROME IV criteria. Dietary intake was captured using 3-day food diaries recorded on consecutive days, including one weekend day. Nutrient analysis was performed with Foodworks 10 and statistical analysis with SPSS using a generalised linear model (ethnicity and FC status as fixed factors). Of 78 enrolled participants, 66 completed the study and 64 (39.4 ± 9.2 years) completed a 3-day food diary at the baseline assessment.
More participants were female (84%) than male (16%). FC and ethnicity status allocated participants into 1 of 4 groups: Ch-FC (n = 11), Ch-CON (n = 18), NCh-FC (n = 16), NCh-CON (n = 19). Within NCh, ethnicities included NZ European (30%), non-Chinese Asian (11%), Other European (11%), and Latin American (2%). Fibre intake did not differ between Ch-FC and NCh-FC (ethnicity × FC status interaction p>0.05) but was independently lower overall for FC than CON individuals (21.8 ± 8.7 versus 27.0 ± 9.7 g, p<0.05) and overall for Ch than NCh (22.1 ± 8.0 versus 27.0 ± 10.4 g, p<0.05). Carbohydrate, protein, and fat intakes were not different across groups (p>0.05 for each). In the context of fibre and macronutrient intake, there was no difference between Ch-FC and NCh-FC. Therefore, fibre and macronutrients are unlikely to contribute to potential pathophysiological differences in FC between ethnic groups. A more detailed assessment of dietary intake concerning micronutrients, types of fibre, or food choices may be indicated to ascertain whether other dietary differences exist.
Diets low in vegetables are a main contributor to the health burden experienced by Australians living in rural communities. Given the ubiquity of smartphones and access to the Internet, digital interventions may offer an accessible delivery model for a dietary intervention in rural communities. However, no digital interventions to address low vegetable intake have been co-designed with adults living in rural areas(1). This research aims to describe the co-design of a digital intervention to improve vegetable intake with rural community members and research partners. Active participants in the co-design process were adults ≥18 years living in three rural Australian communities (total n = 57) and research partners (n = 4) representing three local rural governments and one peak non-government health organisation. An iterative co-design process(2) was undertaken to understand the needs (pre-design phase) and ideas (generative phase) of the target population through eight online workshops and a 21-item online community survey between July and December 2021. Prioritisation methods were used to help workshop participants identify the ‘Must-have, Should-have, Could-have, and Won’t-have or will not have right now’ (MoSCoW) features and functions of the digital intervention. Workshops were transcribed and inductively analysed using NVivo. Convergent and divergent themes were identified between the workshops and community survey to identify how to implement the digital intervention in the community. Consensus was reached on a concept for a digital intervention that addressed individual and food environment barriers to vegetable intake, specific to rural communities. 
Implementation recommendations centred on i) food literacy approaches to improve skills via access to vegetable-rich recipes and healthy eating resources, ii) access to personalisation options and behaviour change support, and iii) improving the community food environment by providing information on and access to local food initiatives. Rural-dwelling adults expressed preferences for personalised intervention features that can enhance food literacy and engagement with community food environments. This co-design process will inform the development of a prototype (evaluation phase) and feasibility testing (post-design phase) of this intervention. The resulting intervention is anticipated to reduce barriers and support enablers, across individual and community levels, to facilitate higher consumption of vegetables among rural Australians. These outcomes have the potential to contribute to improved wellbeing in the short term and reduced chronic disease risk in the long term, decreasing public health inequities.
Previous research has found that measures of premorbid intellectual functioning may be predictive of performance on memory tasks among older adults (Duff, 2010). Intellectual functioning itself is correlated with education. The purpose of this study was to investigate the incremental validity of a measure of premorbid intellectual functioning over education levels to predict performance on the Virtual Environment Grocery Store (VEGS), which involves a simulated shopping experience assessing learning, memory, and executive functioning.
Participants and Methods:
Older adults (N = 118, 60.2% female, age 60-90, M = 73.51, SD = 7.46) completed the Wechsler Test of Adult Reading (WTAR) and the VEGS.
Results:
WTAR and education level explained 9.4% of the variance in VEGS long delay free recall (F = 5.97, p = 0.003). WTAR was a significant predictor (β = 0.25, p = 0.006), while level of education was not.
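Incremental validity of this kind is commonly tested with hierarchical regression: fit the model with education alone, add WTAR, and examine the change in R². A sketch on simulated data (variable names echo the study, but all effect sizes are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 118  # sample size from the study

# Hypothetical data: education and WTAR are correlated, but WTAR
# carries extra signal for delayed recall.
education = rng.normal(14, 2, n)
wtar = 0.5 * education + rng.normal(0, 1, n)
recall = 0.25 * wtar + rng.normal(0, 1, n)

def r_squared(X, y):
    """R^2 from an OLS fit with intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_edu = r_squared(education, recall)
r2_both = r_squared(np.column_stack([education, wtar]), recall)
print(f"incremental R^2 of WTAR over education: {r2_both - r2_edu:.3f}")
```

In a full analysis the increment would be tested with an F-test on the change in R², which is what the reported F statistic corresponds to.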
Conclusions:
These results suggest that crystallized intelligence may benefit recall on a virtual reality shopping task.
This study aimed to map the maturity of precision oncology as an example of a Learning Health System by understanding the current state of practice, tools and informatics, and barriers and facilitators of maturity.
Methods:
We conducted semi-structured interviews with 34 professionals (e.g., clinicians, pathologists, and program managers) involved in Molecular Tumor Boards (MTBs). Interviewees were recruited through outreach at 3 large academic medical centers (AMCs) (n = 16) and a Next Generation Sequencing (NGS) company (n = 18). Interviewees were asked about their roles and relationships with MTBs, processes and tools used, and institutional practices. The interviews were then coded and analyzed to understand the variation in maturity across the evolving field of precision oncology.
Results:
The findings provide insight into the present level of maturity in the precision oncology field, including the state of tooling and informatics within the same domain, the effects of the critical environment on overall maturity, and prospective approaches to enhance maturity of the field. We found that maturity is relatively low, but continuing to evolve, across these dimensions due to the resource-intensive and complex sociotechnical infrastructure required to advance maturity of the field and to fully close learning loops.
Conclusion:
Our findings advance the field by defining and contextualizing the current state of maturity and potential future strategies for advancing precision oncology, providing a framework to examine how learning health systems mature, and furthering the development of maturity models with new evidence.
Remitted psychotic depression (MDDPsy) has heterogeneity of outcome. The study's aims were to identify subgroups of persons with remitted MDDPsy with distinct trajectories of depression severity during continuation treatment and to detect predictors of membership to the worsening trajectory.
Method
One hundred and twenty-six persons aged 18–85 years participated in a 36-week randomized placebo-controlled trial (RCT) that examined the clinical effects of continuing olanzapine once an episode of MDDPsy had remitted with sertraline plus olanzapine. Latent class mixed modeling was used to identify subgroups of participants with distinct trajectories of depression severity during the RCT. Machine learning was used to predict membership to the trajectories based on participant pre-trajectory characteristics.
Results
Seventy-one (56.3%) participants belonged to a subgroup with a stable trajectory of depression scores and 55 (43.7%) belonged to a subgroup with a worsening trajectory. A random forest model with high prediction accuracy (AUC of 0.812) found that the strongest predictors of membership to the worsening subgroup were residual depression symptoms at onset of remission, followed by anxiety score at RCT baseline and age of onset of the first lifetime depressive episode. In a logistic regression model that examined depression score at onset of remission as the only predictor variable, the AUC (0.778) was close to that of the machine learning model.
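The modelling comparison described above, a random forest against a simpler logistic regression evaluated by AUC, can be sketched on synthetic data (the real predictors and outcomes are not reproduced here; feature counts and settings are illustrative):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the pre-trajectory predictors (the study's strongest
# were residual depression, anxiety score, and age of onset).
X, y = make_classification(n_samples=126, n_features=10, n_informative=3,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
lr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

auc_rf = roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1])
auc_lr = roc_auc_score(y_te, lr.predict_proba(X_te)[:, 1])
print(f"random forest AUC {auc_rf:.3f} vs logistic AUC {auc_lr:.3f}")
```

As in the study, a small AUC gap between the two models suggests that most of the predictive signal is carried by a few strong predictors.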
Conclusions
Residual depression at onset of remission has high accuracy in predicting membership to worsening outcome of remitted MDDPsy. Research is needed to determine how best to optimize the outcome of psychotic MDDPsy with residual symptoms.
Changes in abundance and distribution of marine top predators can indicate environmental change or anthropogenic pressure requiring management response. Here, we used an extensive dataset (21 years) to conduct a spatial and temporal analysis of grey seal strandings in Cornwall and the Isles of Scilly, close to the southern edge of the breeding range of the species. A total of 2007 strandings were reported from 2000 to 2020, increasing by 474% from 35 to 201 individuals per year during this period. The continued rise in strandings was consistent across all life stages and timeframes (5, 10 and 20 years), underpinning the suggestion of increasing abundance in the region. The observed seasonality differed by life stage, coinciding with the increased presence of animals near the coast for key life phases such as breeding, moulting and pupping. Strandings are widely distributed across the coast of Cornwall and the Isles of Scilly; however, most strandings were recorded on the north coast of Cornwall (70%) where major pupping and haul out sites are found. Despite hosting several pupping and haul out sites, a small proportion was recorded on the Isles of Scilly (5%) where it is thought that strandings are particularly underreported. Describing baselines in magnitude of strandings and life-stage compositions across space and time allows future deviations in frequency, demographic composition or spatial distribution to be detected and investigated. We demonstrate the utility of long-term citizen science data to provide valuable and cost-effective information on the distribution and abundance of a highly mobile marine mammal.
Paliperidone 3-monthly (PP3M) is a long-acting injectable antipsychotic (LAI) which has been shown to be an equally effective and more convenient alternative to Paliperidone 1-monthly (PP1M) (Hope et al. Australas Psychiatry 2018;26(2):206-209). A prerequisite for PP3M use is stability on consistent dosing of PP1M for ≥4 months; however, few studies have so far explored patients’ experiences with switching.
Objectives
The aim of the study was to assess satisfaction and perspectives following the change to PP3M. A safety question with regard to Covid-19 was also included.
Methods
This cross-sectional survey was performed within a large, urban mental health setting between May and June 2021, while the UK was still under Covid-19 restrictions. Two psychiatrists obtained verbal consent before administering the survey. Questions 1 and 2 focused on satisfaction and safety, with respondents rating to what extent they agreed or disagreed using a 5-point Likert scale. Questions 3 and 4 focused on advantages and disadvantages of the medication change; suggested answers were supplied, but there was also an option to provide additional responses. Additional demographic and clinical information was collected from the electronic records.
Results
Of the 61 patients who were receiving PP3M at the time of the survey, 46 (31 male and 15 female) agreed to participate. One declined to participate, while 14 were not contactable, making the response rate 98% (46/47).
89.5% of respondents strongly agreed or agreed that they were satisfied after switching, 6.5% neither agreed nor disagreed and 4% disagreed. The bulk of the respondents (93.5%) strongly agreed or agreed that they felt safer having their injection every 3 months during the Covid-19 pandemic. 6.5% neither agreed nor disagreed but no one disagreed with this statement.
Questions on whether patients experienced any advantages or disadvantages as a result of the switch allowed for multiple answers. Convenience (93.5%) was the most popular positive reply, followed by improved quality of life (59%), decreased stigma (39%), better adherence (28%) and improved tolerability (21.7%). While 6.5% did not experience any advantages, 93.5% did not encounter any disadvantages, with 4.3% reporting worsening or new side effects and 2.2% a relapse of symptoms.
Conclusions
The overall experience of switching to PP3M was positive. Similar to two previous studies (Pungor et al. BMC Psychiatry. 2021; 21, 300; Rise et al. Nord. J. Psychiatry 2021;75(4): 257-265) the majority of patients favoured the change quoting convenience, quality of life and reduced stigma as potential benefits. The importance of enhanced safety with less frequent medication administration under pandemic conditions was also highlighted. Shared and supported decision making should further inform clinical practice (Pappa et al. Community Ment Health J. 2021;57(8):1566–1578).
Research is increasingly conducted through multi-institutional consortia, and best practices for establishing multi-site research collaborations must be employed to ensure efficient, effective, and productive translational research teams. In this manuscript, we describe how the Population-based Research to Optimize the Screening Process Lung Research Center (PROSPR-Lung) utilized evidence-based Science of Team Science (SciTS) best practices to establish the consortium’s infrastructure and processes to promote translational research in lung cancer screening. We provide specific, actionable examples of how we: (1) developed and reinforced a shared mission, vision, and goals; (2) maintained a transparent and representative leadership structure; (3) employed strong research support systems; (4) provided efficient and effective data management; (5) promoted interdisciplinary conversations; and (6) built a culture of trust. We offer guidance for managing a multi-site research center and data repository that may be applied to a variety of settings. Finally, we detail specific project management tools and processes used to drive collaboration, efficiency, and scientific productivity.
Mental health needs and disparities are widespread and have been exacerbated by the COVID-19 pandemic, with the greatest burden being on marginalized individuals worldwide. The World Health Organization developed the Mental Health Gap Action Programme to address growing global mental health needs by promoting task sharing in the delivery of psychosocial and psychological interventions. However, little is known about the training needed for non-specialists to deliver these interventions with high levels of competence and fidelity. This article provides a brief conceptual overview of the evidence concerning the training of non-specialists carrying out task-sharing psychosocial and psychological interventions while utilizing illustrative case studies from Kenya, Ethiopia, and the United States to highlight findings from the literature. In this article, the authors discuss the importance of tailoring training to the skills and needs of the non-specialist providers and their roles in the delivery of an intervention. This narrative review with four case studies advocates for training that recognizes the expertise that non-specialist providers bring to intervention delivery, including how they promote culturally responsive care within their communities.
Adults who had non-edematous severe acute malnutrition (SAM) during infancy (i.e., marasmus) have worse glucose tolerance and beta-cell function than survivors of edematous SAM (i.e., kwashiorkor). We hypothesized that wasting and/or stunting in SAM is associated with lower glucose disposal rate (M) and insulin clearance (MCR) in adulthood.
We recruited 40 nondiabetic adult SAM survivors (20 marasmus survivors (MS) and 20 kwashiorkor survivors (KS)) and 13 matched community controls. We performed 150-minute hyperinsulinaemic, euglycaemic clamps to estimate M and MCR. We also measured serum adiponectin, anthropometry, and body composition. Data on wasting (weight-for-height) and stunting (height-for-age) were abstracted from the hospital records.
Children with marasmus had lower weight-for-height z-scores (WHZ) (−3.8 ± 0.9 vs. −2.2 ± 1.4; P < 0.001) and lower height-for-age z-scores (HAZ) (−4.6 ± 1.1 vs. −3.4 ± 1.5; P = 0.0092) than those with kwashiorkor. As adults, mean age (SD) of participants was 27.2 (8.1) years; BMI was 23.6 (5.0) kg/m2. SAM survivors and controls had similar body composition. MS, KS, and controls had similar M (9.1 ± 3.2, 8.7 ± 4.6, and 6.9 ± 2.5 mg.kg−1.min−1, respectively; P = 0.3) and MCR. WHZ and HAZ were not associated with M, MCR or adiponectin, even after adjusting for body composition.
Wasting and stunting during infancy are not associated with insulin sensitivity and insulin clearance in lean, young, adult survivors of SAM. These data are consistent with the finding that glucose intolerance in malnutrition survivors is mostly due to beta-cell dysfunction.
Anxiety and depression are leading causes of disability worldwide, yet individuals are often unable to access appropriate treatment. There is a need to develop effective interventions that can be delivered remotely. Previous research has suggested that emotional processing biases are a potential target for intervention, and these may be altered through brief training programs.
Methods
We report two experimental medicine studies of emotional bias training (EBT) in two samples: individuals from the general population (n = 522) and individuals currently taking antidepressants to treat anxiety or depression (n = 212). Participants, recruited online, completed four sessions of EBT from their own home. Mental health and cognitive functioning outcomes were assessed at baseline, immediately post-training, and at 2-week follow-up.
Results
In both studies, our intervention successfully trained participants to perceive ambiguous social information more positively. This persisted at a 2-week follow-up. There was no clear evidence that this change in emotional processing transferred to improvements in symptoms in the primary analyses. However, in both studies, there was weak evidence for improved quality of life following EBT amongst individuals with more depressive symptoms at baseline. No clear evidence of transfer effects was observed for self-reported daily stress, anhedonia or depressive symptoms. Exploratory analyses suggested that younger participants reported greater treatment gains.
Conclusions
These studies demonstrate the effectiveness of delivering a multi-session online training program to promote lasting cognitive changes. Given the inconsistent evidence for transfer effects, EBT requires further development before it can be considered as a treatment for anxiety and depression.
Effective mentoring is a key mechanism propelling successful research and academic careers, particularly for early career scholars. Most mentoring programs focus on models pairing senior and early career researchers, with limited focus on peer mentoring. Peer mentoring may be especially advantageous within emerging areas such as implementation science (IS) where challenges to traditional mentoring may be more prevalent. This special communication highlights the value of peer mentoring by describing a case study of an early career IS peer mentoring group. We delineate our curriculum and structure; support and processes; and products and outcomes. We highlight important group member characteristics to consider during group formation and continuation. The group’s long-term (6 years) success was attributed to the balance of similarities and differences among group members. Members were in a similar career phase and used similar methodologies but studied different health topics at different institutions. Group members gave and received instrumental and psychosocial support and shared resources and knowledge. Peer mentoring can serve an important function to provide emotional, logistical, and professional development support for early career scholars. Our case study highlights strategies to foster peer mentoring groups that provide a generalizable blueprint and opportunity for improved outcomes for early career professionals.
Little is known about the relationship between psychomotor disturbance (PMD) and treatment outcome of psychotic depression. This study examined the association between PMD and subsequent remission and relapse of treated psychotic depression.
Methods
Two hundred and sixty-nine men and women aged 18–85 years with an episode of psychotic depression were treated with open-label sertraline plus olanzapine for up to 12 weeks. Participants who remained in remission or near-remission following an 8-week stabilization phase were eligible to participate in a 36-week randomized controlled trial (RCT) that compared the efficacy and tolerability of sertraline plus olanzapine (n = 64) with sertraline plus placebo (n = 62). PMD was measured with the psychiatrist-rated sign-based CORE at acute phase baseline and at RCT baseline. Spearman's correlations and logistic regression analyses were used to examine the associations between CORE total score at acute phase baseline and remission/near-remission, and between CORE total score at RCT baseline and relapse.
Results
Higher CORE total score at acute phase baseline was associated with lower frequency of remission/near-remission. Higher CORE total score at RCT baseline was associated with higher frequency of relapse, in the RCT sample as a whole, as well as in each of the two randomized groups.
Conclusions
PMD is associated with poorer outcome of psychotic depression treated with sertraline plus olanzapine. Future research needs to examine the neurobiology of PMD in psychotic depression in relation to treatment outcome.
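The logistic-regression step of the analysis above can be illustrated on synthetic data. Everything below is simulated for illustration: the CORE scores, outcome probabilities, and effect size are invented, and a small self-contained gradient-descent fit stands in for standard statistical software.

```python
import math
import random

def fit_logistic(xs, ys, lr=0.5, epochs=3000):
    """Fit a one-predictor logistic regression by gradient descent.
    Returns (intercept, slope) on the scale of xs."""
    b0 = b1 = 0.0
    n = len(xs)
    for _ in range(epochs):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += p - y
            g1 += (p - y) * x
        b0 -= lr * g0 / n
        b1 -= lr * g1 / n
    return b0, b1

random.seed(1)
# Simulated CORE total scores, and a remission indicator generated so that
# higher scores carry lower odds of remission (an invented effect size)
core = [random.uniform(0, 30) for _ in range(300)]
remit = [1 if random.random() < 1 / (1 + math.exp(-(2.0 - 0.15 * c))) else 0
         for c in core]

# Standardise the predictor so plain gradient descent converges quickly
mean = sum(core) / len(core)
sd = (sum((c - mean) ** 2 for c in core) / len(core)) ** 0.5
z = [(c - mean) / sd for c in core]

b0, b1 = fit_logistic(z, remit)
# The fitted slope recovers the built-in negative association
print(b1 < 0)
```

A negative fitted slope here corresponds to the abstract's finding that a higher CORE total score was associated with a lower frequency of remission/near-remission.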
The lack of radiation knowledge among the general public continues to be a challenge for building communities prepared for radiological emergencies. This study applied a multi-criteria decision analysis (MCDA) to the results of an expert survey to identify priority risk reduction messages and challenges to increasing community radiological emergency preparedness.
Methods:
Professionals with expertise in radiological emergency preparedness, state/local health and emergency management officials, and journalists/journalism academics were surveyed using purposive sampling. An MCDA was used to weight criteria of importance in a radiological emergency, and the weighted criteria were applied to topics such as sheltering-in-place, decontamination, and use of potassium iodide. Results were reviewed by respondent group and in aggregate.
Results:
Sheltering-in-place and evacuation plans were identified as the most important risk reduction measures to communicate to the public. Possible communication challenges during a radiological emergency included access to accurate information; low levels of public trust; public knowledge about radiation; and communications infrastructure failures.
Conclusions:
Future assessments of community readiness for a radiological emergency should include questions about sheltering-in-place and evacuation plans to inform risk communication.
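The weighted-criteria step of an MCDA can be sketched as a simple weighted sum. The criterion names, weights, and ratings below are invented for illustration; they are not the study's survey data.

```python
# Hypothetical criteria weights, normalised to sum to 1 (in the study these
# came from the expert survey; these numbers are made up)
weights = {"life_safety": 0.40, "actionability": 0.35, "public_trust": 0.25}

# Hypothetical 1-5 ratings of each message topic against each criterion
ratings = {
    "sheltering_in_place": {"life_safety": 5, "actionability": 4, "public_trust": 4},
    "evacuation_plans":    {"life_safety": 5, "actionability": 4, "public_trust": 3},
    "decontamination":     {"life_safety": 3, "actionability": 3, "public_trust": 3},
    "potassium_iodide":    {"life_safety": 2, "actionability": 3, "public_trust": 2},
}

def mcda_score(topic_ratings, weights):
    """Weighted-sum MCDA score: sum of (criterion weight x rating)."""
    return sum(weights[c] * r for c, r in topic_ratings.items())

# Rank topics by weighted score, highest priority first
ranked = sorted(ratings, key=lambda t: mcda_score(ratings[t], weights), reverse=True)
print(ranked)
```

With these toy numbers, sheltering-in-place and evacuation plans come out on top, mirroring the ranking the study reports for its real expert-derived weights.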
Higher lifetime antipsychotic exposure has been associated with poorer cognition in schizophrenia. The cognitive effects of adjunctive psychiatric medications and lifetime trends of antipsychotic use remain largely unclear. We aimed to study how lifetime and current benzodiazepine and antidepressant medications, lifetime trends of antipsychotic use and antipsychotic polypharmacy are associated with cognitive performance in midlife schizophrenia.
Methods:
Sixty participants with DSM-IV schizophrenia from the Northern Finland Birth Cohort 1966 were examined at 43 years of age with an extensive cognitive test battery. Cumulative lifetime and current use of psychiatric medications were collected from medical records and interviews. The associations between medication use and a principal component analysis-based cognitive composite score were analysed using linear regression.
Results:
Lifetime cumulative DDD years of benzodiazepine and antidepressant medications were not significantly associated with global cognition. Being without antipsychotic medication for a minimum of 11 months before the cognitive examination was associated with better cognitive performance (P = 0.007), and higher lifetime cumulative DDD years of antipsychotics were associated with poorer cognition (P = 0.020), when adjusted for gender, onset age and lifetime hospital treatment days. Other lifetime trends of antipsychotic use, such as a long antipsychotic-free period earlier in the treatment history, and antipsychotic polypharmacy were not significantly associated with cognition.
Conclusions:
Based on these naturalistic data, low exposure to adjunctive benzodiazepine and antidepressant medications does not seem to affect cognition, nor does it explain the possible negative effects of high-dose, long-term antipsychotic medication on cognition in schizophrenia.
A robust biomedical informatics infrastructure is essential for academic health centers engaged in translational research, yet there are no templates for what such an infrastructure encompasses or how it is funded. An informatics workgroup within the Clinical and Translational Science Awards network conducted an analysis to identify the scope, governance, and funding of this infrastructure. After identifying the essential components of an informatics infrastructure, we surveyed informatics leaders at network institutions about the governance and sustainability of the different components. Results from 42 survey respondents showed significant variations in governance and sustainability; however, some trends also emerged. Core informatics components such as electronic data capture systems, electronic health record data repositories, and related tools had mixed funding models, including fee-for-service, extramural grants, and institutional support. Several key components, such as regulatory systems (e.g., electronic Institutional Review Board [IRB] systems, grants, and contracts), security systems, data warehouses, and clinical trials management systems, were overwhelmingly supported as institutional infrastructure. These findings are worth noting for academic health centers and funding agencies planning current and future informatics infrastructure, which provides the foundation for a robust, data-driven clinical and translational research program.