New Zealand and Australian governments rely heavily on voluntary industry initiatives to improve population nutrition, such as voluntary front-of-pack nutrition labelling (Health Star Rating [HSR]), industry-led food advertising standards, and optional food reformulation programmes. Research in both countries has shown that food companies vary considerably in their policies and practices on nutrition(1). We aimed to determine whether a tailored nutrition support programme for food companies improved their nutrition policies and practices compared with control companies that were not offered the programme. REFORM was a 24-month, two-country, cluster-randomised controlled trial. 132 major packaged food/drink manufacturers (n=96) and fast-food companies (n=36) were randomly assigned (2:1 ratio) to receive a 12-month tailored support programme or to the control group (no intervention). The intervention group was offered a programme designed and delivered by public health academics comprising regular meetings, tailored company reports, and recommendations and resources to improve product composition (e.g., reducing nutrients of concern through reformulation), nutrition labelling (e.g., adoption of HSR labels), marketing to children (reducing the exposure of children to unhealthy products and brands), and nutrition policy and corporate sustainability reporting. The primary outcome was the nutrient profile (measured using HSR) of company food and drink products at 24 months. Secondary outcomes were the nutrient content (energy, sodium, total sugar, and saturated fat) of company products, display of HSR labels on packaged products, company nutrition-related policies and commitments, and engagement with the intervention. Eighty-eight eligible intervention companies (9,235 products at baseline) were invited to participate, of whom 21 accepted and were enrolled in the REFORM programme (delivered between September 2021 and December 2022). Forty-four companies (3,551 products at baseline) were randomised to the control arm. At 24 months, the model-adjusted mean HSR of intervention company products was 2.58 compared with 2.68 for control companies, with no significant difference between groups (mean difference -0.10, 95% CI -0.40 to 0.21, p-value 0.53). A per-protocol analysis of intervention companies that enrolled in the programme compared with control companies with no major protocol violation also found no significant difference (2.93 vs 2.64, mean difference 0.29, 95% CI -0.13 to 0.72, p-value 0.18). We found no significant differences between the intervention and control groups in any secondary outcome, except total sugar (g/100g), where the sugar content of intervention company products was higher than that of control companies (12.32 vs 6.98, mean difference 5.34, 95% CI 1.73 to 8.96, p-value 0.004). The per-protocol analysis for sugar did not show a significant difference (10.47 vs 7.44, mean difference 3.03, 95% CI -0.48 to 6.53, p-value 0.09). In conclusion, a 12-month tailored nutrition support programme for food companies did not improve the nutrient profile of company products.
Objectives/Goals: We describe the prevalence of individuals with household exposure to SARS-CoV-2 who subsequently report symptoms consistent with COVID-19 while having PCR results persistently negative for SARS-CoV-2 (S[+]/P[-]), and we assess whether paired serology can assist in identifying the true infection status of such individuals. Methods/Study Population: In a multicenter household transmission study, index patients with SARS-CoV-2 were identified and enrolled together with their household contacts within 1 week of the index's illness onset. For 10 consecutive days, enrolled individuals provided daily symptom diaries and nasal specimens for polymerase chain reaction (PCR) testing. Contacts were categorized into 4 groups based on presence of symptoms (S[+/-]) and PCR positivity (P[+/-]). Acute and convalescent blood specimens from these individuals (30 days apart) were subjected to quantitative serologic analysis for SARS-CoV-2 anti-nucleocapsid, spike, and receptor-binding domain antibodies. The antibody change in S[+]/P[-] individuals was assessed against thresholds derived from receiver operating characteristic (ROC) analysis of S[+]/P[+] (infected) versus S[-]/P[-] (uninfected) individuals. Results/Anticipated Results: Among 1,433 contacts, 67% had ≥1 SARS-CoV-2 PCR[+] result, while 33% remained PCR[-]. Among the latter, 55% (n = 263) reported symptoms for at least 1 day, most commonly congestion (63%), fatigue (63%), headache (62%), cough (59%), and sore throat (50%). A history of both previous infection and vaccination was present in 37% of S[+]/P[-] individuals, 38% of S[-]/P[-], and 21% of S[+]/P[+] (P < 0.05). Vaccination alone was present in 37%, 41%, and 52%, respectively. ROC analyses of paired serologic testing of S[+]/P[+] (n = 354) vs. S[-]/P[-] (n = 103) individuals found that anti-nucleocapsid data had the highest area under the curve (0.87). Based on the 30-day antibody change, 6.9% of S[+]/P[-] individuals demonstrated an increased convalescent antibody signal, although a similar seroresponse was observed in 7.8% of the S[-]/P[-] group. Discussion/Significance of Impact: Reporting respiratory symptoms was common among household contacts with persistent PCR[-] results. Paired serology analyses found similar seroresponses between S[+]/P[-] and S[-]/P[-] individuals. The symptomatic-but-PCR-negative phenomenon, while frequent, is unlikely attributable to true SARS-CoV-2 infections missed by PCR.
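As an illustration of the ROC step described above, the sketch below derives an antibody-change threshold from the known-infected (S[+]/P[+]) and uninfected (S[-]/P[-]) groups and applies it to the indeterminate S[+]/P[-] contacts. This is a minimal reconstruction under assumed data; the file and column names are hypothetical.

```python
# Minimal sketch: ROC-derived threshold on the 30-day anti-nucleocapsid change.
# All file and column names are hypothetical stand-ins for the study data.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score, roc_curve

df = pd.read_csv("serology_pairs.csv")  # one row per household contact

known = df[df["group"].isin(["S+P+", "S-P-"])]
y = (known["group"] == "S+P+").astype(int)
delta = known["anti_n_convalescent"] - known["anti_n_acute"]  # 30-day change

print("AUC:", roc_auc_score(y, delta))       # abstract reports 0.87 for anti-N
fpr, tpr, thresholds = roc_curve(y, delta)
cut = thresholds[np.argmax(tpr - fpr)]       # threshold maximizing Youden's J

indeterminate = df[df["group"] == "S+P-"]
ind_delta = indeterminate["anti_n_convalescent"] - indeterminate["anti_n_acute"]
print("seroresponse rate:", (ind_delta >= cut).mean())  # compare with S-/P- rate
```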
Previous studies in rodents suggest that a mismatch between fetal and postnatal nutrition predisposes individuals to metabolic diseases. We hypothesized that in nonhuman primates (NHP), fetal programming by maternal undernutrition (MUN) persists postnatally, with a dietary mismatch altering metabolic molecular systems before changes appear in standard clinical measures. We used unbiased molecular approaches to examine the response to a high-fat, high-carbohydrate diet plus sugar drink (HFCS) challenge in NHP juvenile offspring of MUN pregnancies compared with controls (CON). Pregnant baboons were fed ad libitum (CON) or at a 30% calorie reduction (MUN) from 0.16 gestation through lactation; weaned offspring were fed chow ad libitum. MUN offspring were growth restricted at birth. Liver, omental fat, and skeletal muscle gene expression, as well as liver glycogen, muscle mitochondria, and fat cell size, were quantified. Before the challenge, MUN offspring had lower body mass index (BMI) and liver glycogen, and consumed more sugar drink than CON. After the HFCS challenge, MUN and CON BMIs were similar. Molecular analyses showed HFCS response differences between CON and MUN in muscle and liver, including hepatic splicing and the unfolded protein response. Altered liver signaling pathways and glycogen content between MUN and CON at baseline indicate that in utero programming persists in MUN juveniles. MUN catch-up growth during HFCS consumption suggests increased risk of obesity, diabetes, and cardiovascular disease. Greater sugar drink consumption in MUN demonstrates altered appetitive drive due to programming. Differences in blood leptin, liver glycogen, and tissue-specific molecular responses to HFCS suggest that MUN significantly impacts juvenile offspring's ability to manage an energy-rich diet.
Auditory verbal hallucinations (AVHs) in schizophrenia have been suggested to arise from a failure of corollary discharge mechanisms to correctly predict and suppress self-initiated inner speech. However, it is unclear whether such dysfunction is related to the motor preparation of inner speech, during which sensorimotor predictions are formed. The contingent negative variation (CNV) is a slow, negative-going event-related potential that occurs prior to executing an action. A recent meta-analysis revealed a large effect for CNV blunting in schizophrenia. Given that inner speech, like overt speech, has been shown to be preceded by a CNV, the present study tested the notion that AVHs are associated with inner speech-specific motor preparation deficits.
Objectives
The present study aimed to provide a useful framework for directly testing the long-held idea that AVHs may be related to inner speech-specific CNV blunting in patients with schizophrenia. This may hold promise for a reliable biomarker of AVHs.
Methods
Hallucinating (n=52) and non-hallucinating (n=45) patients with schizophrenia, along with matched healthy controls (n=42), participated in a novel electroencephalographic (EEG) paradigm. In the Active condition, they were asked to imagine a single phoneme at a cue moment while, precisely at the same time, being presented with an auditory probe. In the Passive condition, they were asked to passively listen to the auditory probes. The amplitude of the CNV preceding the production of inner speech was examined.
Results
Healthy controls showed a larger CNV amplitude (p = .002, d = .50) in the Active compared to the Passive condition, replicating previous reports of a CNV preceding inner speech. However, neither patient group showed a difference between the two conditions (p > .05). Importantly, a repeated-measures ANOVA revealed a significant interaction effect (p = .007, ηp2 = .05). Follow-up contrasts showed that healthy controls exhibited a larger CNV amplitude in the Active condition than both the hallucinating (p = .013, d = .52) and non-hallucinating patients (p < .001, d = .88). No difference was found between the two patient groups (p = .320, d = .20).
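The group-by-condition analysis reported here can be sketched as a mixed-design ANOVA; the snippet below, using the pingouin package on hypothetical long-format data (one mean CNV amplitude per participant per condition), is an illustration rather than the study's exact pipeline.

```python
# Sketch of a 3 (group) x 2 (condition) mixed ANOVA on CNV amplitude.
# Data layout and column names are assumed, not taken from the study.
import pandas as pd
import pingouin as pg

df = pd.read_csv("cnv_amplitudes.csv")  # columns: subject, group, condition, cnv

aov = pg.mixed_anova(data=df, dv="cnv", within="condition",
                     subject="subject", between="group")
print(aov)  # the abstract reports a significant interaction (p = .007)

# Follow-up contrasts: Active vs Passive within each group
posthoc = pg.pairwise_tests(data=df, dv="cnv", within="condition",
                            between="group", subject="subject")
print(posthoc)
```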
Conclusions
The results indicate that the motor preparation of inner speech is disrupted in schizophrenia. While the production of inner speech resulted in a larger CNV than passive listening in healthy controls, indicative of the involvement of motor planning, patients exhibited markedly blunted motor preparatory activity to inner speech. This may reflect dysfunction in the formation of corollary discharges. Interestingly, the deficits did not differ between hallucinating and non-hallucinating patients. Future work is needed to elucidate how specific these inner speech-related motor preparation deficits are to AVHs. Overall, this study provides evidence in support of atypical inner speech monitoring in schizophrenia.
Intimate partner violence (IPV) is a public health challenge that negatively affects victims’ health. Telomere length (TL), a marker of biological ageing, may reflect the mechanisms through which IPV leads to adverse health outcomes. The objective of the current study was to explore the association between IPV and leukocyte TL.
Methods
We conducted an analysis using a subset of the UK Biobank (N = 144 049). Physical, sexual and emotional IPV were reported by the participants. DNA was extracted from peripheral blood leukocytes. TL was assayed by quantitative polymerase chain reaction. We used multivariable linear regressions to test the associations between IPV and TL, adjusted for age, sex, ethnicity, deprivation and education, with additional adjustment for symptoms of depression and post-traumatic stress disorder in a sensitivity analysis.
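A minimal sketch of the adjusted regression described above is given below; the UK Biobank field names are replaced with hypothetical column names.

```python
# Sketch: multivariable linear regression of standardized TL on IPV exposure.
# Column names are hypothetical stand-ins for the UK Biobank variables.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ukb_ipv_tl.csv")  # one row per participant

# Standardize TL so the IPV coefficient is in s.d. units, as reported
df["tl_z"] = (df["tl"] - df["tl"].mean()) / df["tl"].std()

model = smf.ols(
    "tl_z ~ any_ipv + age + C(sex) + C(ethnicity) + deprivation + C(education)",
    data=df,
).fit()
print(model.summary())  # abstract: beta = -0.02 (95% CI -0.04 to -0.01) for any IPV
```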
Results
After adjusting for sociodemographic factors, any IPV was associated with 0.02-s.d. shorter TL (β = −0.02, 95% CI −0.04 to −0.01). Of the three types of IPV, physical violence had a marginally stronger association (β = −0.05, 95% CI −0.07 to −0.02) than the other two types. The association between the number of IPV types and TL showed a dose–response pattern, whereby those who experienced all three types of IPV had the shortest TL (β = −0.07, 95% CI −0.12 to −0.03), followed by those who experienced two types (β = −0.04, 95% CI −0.07 to −0.01). Following additional adjustment for symptoms of depression and PTSD, the associations were slightly attenuated but the general trend by number of IPV types remained.
Conclusions
Victims of IPV, particularly those exposed to multiple types of IPV, had shorter TL, indicative of accelerated biological ageing. Given that all three types of IPV are linked to TL, clinical practitioners need to comprehensively identify all types of IPV and those who experienced multiple types. Further studies should explore the association of violence with changes in TL over time, as well as the extent to which biological ageing is a mechanistic factor.
There is evidence that child maltreatment is associated with shorter telomere length in early life.
Aims
This study aims to examine whether child maltreatment is associated with telomere length in middle- and older-age adults.
Method
This was a retrospective cohort study of 141 748 UK Biobank participants aged 37–73 years at recruitment. Leukocyte telomere length was measured with quantitative polymerase chain reaction, and log-transformed and scaled to have unit standard deviation. Child maltreatment was recalled by participants. Linear regression was used to analyse the association.
Results
After adjusting for sociodemographic characteristics, participants with three or more types of maltreatment presented with the shortest telomere lengths (β = −0.05, 95% CI −0.07 to −0.03; P < 0.0001), followed by those with two types of maltreatment (β = −0.02, 95% CI −0.04 to 0.00; P = 0.02), relative to those who had none. When adjusted for depression and post-traumatic stress disorder, the telomere lengths of participants with three or more types of maltreatment were still shorter (β = −0.04, 95% CI −0.07 to −0.02; P = 0.0008). The telomere lengths of those with one type of maltreatment were not significantly different from those who had none. When mutually adjusted, physical abuse (β = −0.05, 95% CI −0.07 to −0.03; P < 0.0001) and sexual abuse (β = −0.02, 95% CI −0.04 to 0.00; P = 0.02) were independently associated with shorter telomere length.
Conclusions
Our findings showed that child maltreatment is associated with shorter telomere length in middle- and older-aged adults, independent of sociodemographic and mental health factors.
In contrast to the well-described effects of early intervention (EI) services for youth-onset psychosis, the potential benefits of the intervention for adult-onset psychosis remain uncertain. This paper aims to examine the effectiveness of EI on functioning and symptomatic improvement in adult-onset psychosis, and the optimal duration of the intervention.
Methods
360 psychosis patients aged 26–55 years were randomized to receive either standard care (SC, n = 120) or case management for 2 years (2-year EI, n = 120) or 4 years (4-year EI, n = 120) in a 4-year rater-masked, parallel-group, superiority, randomized controlled trial of treatment effectiveness (Clinicaltrials.gov: NCT00919620). Primary (i.e. social and occupational functioning) and secondary outcomes (i.e. positive and negative symptoms, and quality of life) were assessed at baseline, at 6 months, and then yearly for 4 years.
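One plausible way to model such repeated outcomes is a linear mixed model with a group-by-time interaction, sketched below on hypothetical long-format trial data; this is an illustration, not the trial's exact specification.

```python
# Sketch: mixed model for repeated RFS scores with a group x time interaction.
# Variable names and model form are assumptions.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ei_trial_long.csv")  # columns: id, group, months, rfs_immediate

md = smf.mixedlm("rfs_immediate ~ C(group, Treatment('SC')) * months",
                 data=df, groups=df["id"])  # random intercept per patient
fit = md.fit()
print(fit.summary())  # interaction terms estimate differential change vs SC
```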
Results
Compared with SC, patients with 4-year EI had better Role Functioning Scale (RFS) immediate [interaction estimate = 0.008, 95% confidence interval (CI) = 0.001–0.014, p = 0.02] and extended social network (interaction estimate = 0.011, 95% CI = 0.004–0.018, p = 0.003) scores. Specifically, these improvements were observed in the first 2 years. Compared with the 2-year EI group, the 4-year EI group had better RFS total (p = 0.01), immediate (p = 0.01), and extended social network (p = 0.05) scores at the fourth year. Meanwhile, the 4-year (p = 0.02) and 2-year EI (p = 0.004) groups had less severe symptoms than the SC group at the first year.
Conclusions
Specialized EI treatment for psychosis patients aged 26–55 should be provided for at least the initial 2 years of illness. Further treatment up to 4 years conferred little additional benefit in this age range over the course of the study.
Brief measures of the subjective experience of stress with good predictive capability are important in a range of community mental health and research settings. The potential for large-scale implementation of such a measure for screening may facilitate early risk detection and intervention opportunities. Few such measures, however, have been developed and validated in epidemiological and longitudinal community samples. We designed a new single-item measure of the subjective level of stress (SLS-1) and tested its validity and ability to predict long-term mental health outcomes of up to 12 months through two separate studies.
Methods
We first examined the content and face validity of the SLS-1 with a panel consisting of mental health experts and laypersons. Two studies were conducted to examine its validity and predictive utility. In study 1, we tested the convergent and divergent validity as well as incremental validity of the SLS-1 in a large epidemiological sample of young people in Hong Kong (n = 1445). In study 2, in a consecutively recruited longitudinal community sample of young people (n = 258), we first performed the same procedures as in study 1 to ensure replicability of the findings. We then examined in this longitudinal sample the utility of the SLS-1 in predicting long-term depressive, anxiety and stress outcomes assessed at 3 months and 6 months (n = 182) and at 12 months (n = 84).
Results
The SLS-1 demonstrated good content and face validity. Findings from the two studies showed that the SLS-1 was moderately to strongly correlated with a range of mental health outcomes, including depressive, anxiety, stress and distress symptoms. We also demonstrated its ability to explain variance in symptoms beyond that explained by other known personal and psychological factors. Using the longitudinal sample in study 2, we further showed the significant predictive capability of the SLS-1 for long-term symptom outcomes for up to 12 months, even when accounting for demographic characteristics.
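The incremental-validity claim above amounts to comparing nested regression models with and without the SLS-1; a minimal sketch under assumed predictor names follows.

```python
# Sketch: does SLS-1 add explained variance beyond other known factors?
# Outcome and predictor names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("sls1_sample.csv")

base = smf.ols("depressive_symptoms ~ age + C(sex) + neuroticism + resilience",
               data=df).fit()
full = smf.ols("depressive_symptoms ~ age + C(sex) + neuroticism + resilience "
               "+ sls1", data=df).fit()

print("delta R^2:", full.rsquared - base.rsquared)
print(full.compare_f_test(base))  # (F statistic, p-value, df difference)
```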
Conclusions
The findings altogether support the validity and predictive utility of the SLS-1 as a brief measure of stress with strong indications of both concurrent and long-term mental health outcomes. Given the value of brief measures of mental health risks at a population level, the SLS-1 may have potential for use as an early screening tool to inform early preventative intervention work.
Iron deficiency (ID) in early life is associated with morbidities. Most fetal iron required for infant growth is acquired in the third trimester from maternal iron stores. However, how prenatal iron levels affect ferritin levels in early infancy remains controversial. This study aimed to examine the associations between maternal ferritin levels and cord blood serum ferritin (CBSF) and to compare ferritin levels between different feeding practices in early infancy. Healthy Chinese mothers with uncomplicated pregnancies and their infants were followed up at 3 months post-delivery for questionnaire completion and infant blood collection. Infants who were predominantly breastfed and those who were predominantly formula fed were included in this analysis. Serum ferritin levels were measured in maternal blood samples collected upon delivery, cord blood and infant blood samples at 3 months of age. Ninety-seven mother–baby dyads were included. Maternal ID was common (56 %), while CBSF levels were significantly higher than maternal ferritin levels. Only three infants (3 %) had ID at 3 months of age. There were no significant correlations between maternal ferritin levels and CBSF (r 0·168, P = 0·108) or infant ferritin levels at 3 months of age (r 0·023, P = 0·828). Infant ferritin levels at 3 months were significantly and independently associated with CBSF (P = 0·007) and birth weight (P < 0·001) after adjusting for maternal age, parity, maternal education, infant sex and feeding practice. In conclusion, maternal ID was common upon delivery. However, maternal ferritin levels were not significantly associated with CBSF concentrations or infant ferritin concentrations at 3 months of age.
To identify a posteriori dietary patterns among women planning pregnancy and assess the reproducibility of these patterns in a subsample using two dietary assessment methods.
Design:
A semi-quantitative FFQ was administered to women enrolled in the Singapore PREconception Study of long-Term maternal and child Outcomes study. Dietary patterns from the FFQ were identified using exploratory factor analysis (EFA). In a subsample of women (n 289), 3-d food diaries (3DFD) were also completed and analysed. Reproducibility of the identified patterns was assessed using confirmatory factor analysis (CFA) in the subsample, and goodness of fit of the CFA models was examined using several fit indices. Subsequently, EFA was conducted in the subsample and dietary patterns of the FFQ and the 3DFD were compared.
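The EFA step can be sketched with the factor_analyzer package, as below; the food-group columns and the choice of varimax rotation are assumptions for illustration.

```python
# Sketch: exploratory factor analysis of FFQ food-group intakes.
# Input columns and rotation choice are hypothetical.
import pandas as pd
from factor_analyzer import FactorAnalyzer

ffq = pd.read_csv("ffq_food_groups.csv")  # one column per food group

fa = FactorAnalyzer(n_factors=3, rotation="varimax")
fa.fit(ffq)

loadings = pd.DataFrame(fa.loadings_, index=ffq.columns,
                        columns=["pattern1", "pattern2", "pattern3"])
# Food groups with |loading| > 0.3 are conventionally taken to define a pattern
print(loadings[loadings.abs().max(axis=1) > 0.3])
```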
Setting:
Singapore.
Participants:
1007 women planning pregnancy (18–45 years).
Results:
Three dietary patterns were identified from the FFQ: the ‘Fish, Poultry/Meat and Noodles’ pattern was characterised by higher intakes of fish, poultry/meat and noodles in soup; the ‘Fast Food and Sweetened Beverages’ pattern was characterised by higher intakes of fast food, sweetened beverages and fried snacks; and the ‘Bread, Legumes and Dairy’ pattern was characterised by higher intakes of buns/ethnic breads, nuts/legumes and dairy products. The comparative fit indices from the CFA models were 0·79 and 0·34 for the FFQ and 3DFD of the subsample, respectively. In the subsample, three similar patterns were identified in the FFQ, while only two were identified for the 3DFD.
Conclusions:
Dietary patterns from the FFQ are reproducible within this cohort, providing a basis for future investigations on diet and health outcomes.
The risk factors for criminal behavior in patients with schizophrenia are not well explored. This study aimed to explore the risk factors for criminal behavior in patients with schizophrenia in rural China.
Methods
We used data from a 14-year prospective follow-up study (1994-2008) of criminal behavior among a cohort (n=510) of patients with schizophrenia in Xinjin County, Chengdu, China.
Results
There were 489 patients (95.9%) who were followed up from 1994 to 2008. The rate of criminal behavior was 13.5% among these patients with schizophrenia during the follow-up period. Compared with female patients (6 cases, 20.0%), male patients had a significantly higher rate of violent criminal behavior (e.g., arson, sexual assault, physical assault, and murder) (24 cases, 80.0%) (p < 0.001). Bivariate analyses showed that the risk of criminal behavior was significantly associated with being unmarried, younger age, previous violent behavior, homelessness, lower family economic status, having no family caregivers, and higher scores on measures (PANSS) of positive, negative, and total symptoms of illness. In multiple logistic regression analyses, being unmarried and previous violent behavior were identified as independent predictors of increased criminal behavior in persons with schizophrenia.
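The multiple logistic regression reported above can be sketched as follows; predictor names are hypothetical stand-ins for the study variables.

```python
# Sketch: logistic regression of criminal behavior on candidate risk factors.
# Column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("schizophrenia_cohort.csv")  # one row per patient

model = smf.logit(
    "criminal_behavior ~ unmarried + previous_violence + age + homeless "
    "+ family_income + panss_total",
    data=df,
).fit()
print(np.exp(model.params))  # odds ratios; unmarried status and previous
                             # violence emerged as independent predictors
```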
Conclusions
The risk factors for criminal behavior among patients with schizophrenia should be understood within a particular social context. Criminal behavior may be predicted by specific characteristics of patients with schizophrenia in rural communities. These risk factors should be considered in planning community mental health care and interventions for high-risk patients and their families.
Abnormal effort-based decision-making represents a potential mechanism underlying motivational deficits (amotivation) in psychotic disorders. Previous research identified effort allocation impairment in chronic schizophrenia and focused mostly on the physical effort modality. No study has investigated cognitive effort allocation in first-episode psychosis (FEP).
Method
Cognitive effort allocation was examined in 40 FEP patients and 44 demographically-matched healthy controls, using the Cognitive Effort-Discounting (COGED) paradigm, which quantified participants’ willingness to expend cognitive effort in terms of explicit, continuous discounting of monetary rewards based on parametrically-varied cognitive demands (levels N of an N-back task). The relationship between reward-discounting and amotivation was investigated. Group differences in reward-magnitude and effort-cost sensitivity, and differential associations of these sensitivity indices with amotivation, were explored.
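To make the COGED logic concrete, the sketch below computes, for each effort level, a subjective value as the titrated indifference point divided by the base reward; lower values indicate steeper effort discounting. The data layout is hypothetical.

```python
# Hypothetical sketch of summarizing COGED discounting from indifference points.
import pandas as pd

df = pd.read_csv("coged_choices.csv")  # columns: subject, n_back_level,
                                       # indifference_point, base_reward

# Subjective value (SV): 1.0 means no discounting of the effortful option
df["sv"] = df["indifference_point"] / df["base_reward"]

# One discounting index per participant: mean SV across N-back loads
# (lower mean SV = greater cognitive effort discounting)
sv_by_subject = df.groupby("subject")["sv"].mean()
print(sv_by_subject.describe())
```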
Results
Patients displayed significantly greater reward-discounting than controls. In particular, discounting was most pronounced in patients with high levels of amotivation, even when N-back performance and reward base amount were taken into consideration. Moreover, patients exhibited reduced reward-benefit and effort-cost sensitivity relative to controls, and decreased sensitivity to reward-benefit, but not effort-cost, was correlated with diminished motivation. Reward-discounting and sensitivity indices were generally unrelated to other symptom dimensions, antipsychotic dose and cognitive deficits.
Conclusion
This study provides the first evidence of cognitive effort-based decision-making impairment in FEP, and indicates that decreased effort expenditure is associated with amotivation. Our findings further suggest that abnormal effort allocation and amotivation might primarily be related to blunted reward valuation. Prospective research is required to clarify the utility of effort-based measures in predicting amotivation and functional outcome in FEP.
Item 9 of the Patient Health Questionnaire-9 (PHQ-9) queries about thoughts of death and self-harm, but not suicidality. Although it is sometimes used to assess suicide risk, most positive responses are not associated with suicidality. The PHQ-8, which omits Item 9, is thus increasingly used in research. We assessed equivalency of total score correlations and the diagnostic accuracy to detect major depression of the PHQ-8 and PHQ-9.
Methods
We conducted an individual patient data meta-analysis. We fit bivariate random-effects models to assess diagnostic accuracy.
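As a simplified illustration of the cutoff evaluation (not the bivariate random-effects model itself, which accounts for between-study variation), the sketch below computes sensitivity and specificity of the PHQ-8 at each cutoff against the reference standard, pooling all participants. Column names are hypothetical.

```python
# Naive pooled sensitivity/specificity scan across PHQ-8 cutoffs; the actual
# analysis used bivariate random-effects models. Column names are hypothetical.
import pandas as pd

df = pd.read_csv("phq_ipd.csv")  # columns: phq8_total, major_depression (0/1)

cases = df["major_depression"] == 1
for cutoff in range(5, 16):
    pos = df["phq8_total"] >= cutoff
    sens = (pos & cases).sum() / cases.sum()
    spec = (~pos & ~cases).sum() / (~cases).sum()
    print(f"cutoff {cutoff}: sens={sens:.2f} spec={spec:.2f} sum={sens+spec:.2f}")
```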
Results
16 742 participants (2097 major depression cases) from 54 studies were included. The correlation between PHQ-8 and PHQ-9 scores was 0.996 (95% confidence interval 0.996 to 0.996). The standard cutoff score of 10 for the PHQ-9 maximized sensitivity + specificity for the PHQ-8 among studies that used a semi-structured diagnostic interview reference standard (N = 27). At cutoff 10, the PHQ-8 was less sensitive by 0.02 (−0.06 to 0.00) and more specific by 0.01 (0.00 to 0.01) among those studies (N = 27), with similar results for studies that used other types of interviews (N = 27). For all 54 primary studies combined, across all cutoffs, the PHQ-8 was less sensitive than the PHQ-9 by 0.00 to 0.05 (0.03 at cutoff 10), and specificity was within 0.01 for all cutoffs (0.00 to 0.01).
Conclusions
PHQ-8 and PHQ-9 total scores were similar. Sensitivity may be minimally reduced with the PHQ-8, but specificity is similar.
Different diagnostic interviews are used as reference standards for major depression classification in research. Semi-structured interviews involve clinical judgement, whereas fully structured interviews are completely scripted. The Mini International Neuropsychiatric Interview (MINI), a brief fully structured interview, is also sometimes used. It is not known whether interview method is associated with probability of major depression classification.
Aims
To evaluate the association between interview method and odds of major depression classification, controlling for depressive symptom scores and participant characteristics.
Method
Data collected for an individual participant data meta-analysis of Patient Health Questionnaire-9 (PHQ-9) diagnostic accuracy were analysed and binomial generalised linear mixed models were fit.
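A rough sketch of such a binomial generalised linear mixed model, with a random intercept for primary study, is shown below using statsmodels' variational Bayes mixed GLM as an approximation; the model form and column names are assumptions.

```python
# Sketch: binomial GLMM for major depression classification by interview type,
# with a study-level random intercept. Names and model form are hypothetical.
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

df = pd.read_csv("ipd_interviews.csv")  # one row per participant

model = BinomialBayesMixedGLM.from_formula(
    "major_depression ~ C(interview_type) * C(phq9_band) + age + C(sex)",
    {"study": "0 + C(study)"},  # random intercept by primary study
    data=df,
)
result = model.fit_vb()  # variational Bayes fit
print(result.summary())
```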
Results
A total of 17 158 participants (2287 with major depression) from 57 primary studies were analysed. Among fully structured interviews, odds of major depression were higher for the MINI compared with the Composite International Diagnostic Interview (CIDI) (odds ratio (OR) = 2.10; 95% CI = 1.15–3.87). Compared with semi-structured interviews, fully structured interviews (MINI excluded) were non-significantly more likely to classify participants with low-level depressive symptoms (PHQ-9 scores ≤6) as having major depression (OR = 3.13; 95% CI = 0.98–10.00), similarly likely for moderate-level symptoms (PHQ-9 scores 7–15) (OR = 0.96; 95% CI = 0.56–1.66) and significantly less likely for high-level symptoms (PHQ-9 scores ≥16) (OR = 0.50; 95% CI = 0.26–0.97).
Conclusions
The MINI may identify more people as depressed than the CIDI, and semi-structured and fully structured interviews may not be interchangeable methods, but these results should be replicated.
Declaration of interest
Drs Jetté and Patten declare that they received a grant, outside the submitted work, from the Hotchkiss Brain Institute, which was jointly funded by the Institute and Pfizer. Pfizer was the original sponsor of the development of the PHQ-9, which is now in the public domain. Dr Chan is a steering committee member or consultant of Astra Zeneca, Bayer, Lilly, MSD and Pfizer. She has received sponsorships and honorarium for giving lectures and providing consultancy and her affiliated institution has received research grants from these companies. Dr Hegerl declares that within the past 3 years, he was an advisory board member for Lundbeck, Servier and Otsuka Pharma; a consultant for Bayer Pharma; and a speaker for Medice Arzneimittel, Novartis, and Roche Pharma, all outside the submitted work. Dr Inagaki declares that he has received grants from Novartis Pharma, lecture fees from Pfizer, Mochida, Shionogi, Sumitomo Dainippon Pharma, Daiichi-Sankyo, Meiji Seika and Takeda, and royalties from Nippon Hyoron Sha, Nanzando, Seiwa Shoten, Igaku-shoin and Technomics, all outside of the submitted work. Dr Yamada reports personal fees from Meiji Seika Pharma Co., Ltd., MSD K.K., Asahi Kasei Pharma Corporation, Seishin Shobo, Seiwa Shoten Co., Ltd., Igaku-shoin Ltd., Chugai Igakusha and Sentan Igakusha, all outside the submitted work. All other authors declare no competing interests. No funder had any role in the design and conduct of the study; collection, management, analysis and interpretation of the data; preparation, review or approval of the manuscript; and decision to submit the manuscript for publication.
Introduction: Burnout rates for emergency physicians (EPs) continue to be amongst the highest in medicine. One of the commonly cited sources of stress contributing to disillusionment is bureaucratic tasks that distract EPs from direct patient care in the emergency department (ED). The novel position of Physician Navigator was created to help EPs decrease their non-clinical workload during shifts and improve productivity. Physician Navigators are non-licensed healthcare team members who assist in activities that are often clerical in nature but directly impact patient care. This program was implemented at no net cost to the hospital or healthcare system. Methods: In this retrospective study, 6845 clinical shifts worked by 20 EPs over 39 months from January 1, 2012 to March 31, 2015 were evaluated. The program was implemented on April 1, 2013. The primary objective was to quantify the effect of Physician Navigators on measures of EP productivity: patients seen per hour (Pt/hr) and turn-around-time (TAT) to discharge. Secondary objectives included examining the impact of Physician Navigators on measures of ED throughput for non-resuscitative patients: ED length of stay (LOS), physician initial assessment time (PIA), and left-without-being-seen rates (LWBS). A mixed linear model was used to evaluate changes in productivity measures between shifts with and without Physician Navigators in a clustered design, by EP. Autoregressive modelling was performed to compare ED throughput metrics before and after the implementation of Physician Navigators for non-resuscitative patients. Results: Across 20 EPs, 2469 shifts before and 4376 shifts after April 1, 2013 were analyzed. Daily patient volumes increased 8.7% during the period with Physician Navigators. For the EPs who used Physician Navigators, Pt/hr increased by 1.07 patients per hour (0.98 to 1.16, p < 0.001), and TAT to discharge decreased by 10.6 minutes (-13.2 to -8.0, p < 0.001). After the implementation of Physician Navigators, overall LOS for non-resuscitative patients decreased by 2.6 minutes (1.0%, p = 0.007), and average PIA decreased by 7.4 minutes (12.0%, p < 0.001). LWBS rates decreased by 43.9% (0.50% of daily patient volume, p < 0.001). Conclusion: The use of a Physician Navigator was associated with increased EP productivity as measured by Pt/hr and TAT to discharge, and with reductions in ED throughput metrics for non-resuscitative patients.
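The autoregressive before/after comparison can be sketched as an interrupted time series with a post-implementation indicator, as below; the daily metrics file, lag order, and covariates are assumptions.

```python
# Sketch: AR model of daily ED length of stay with a post-implementation term.
# File, column names, and lag order are hypothetical.
import pandas as pd
from statsmodels.tsa.ar_model import AutoReg

daily = pd.read_csv("ed_daily_metrics.csv", parse_dates=["date"])
daily["post"] = (daily["date"] >= "2013-04-01").astype(int)

model = AutoReg(daily["mean_los"], lags=7,
                exog=daily[["post", "daily_volume"]])  # weekly autocorrelation
fit = model.fit()
print(fit.params)  # the 'post' coefficient estimates the change after roll-out
```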
Faster eating rates are associated with increased energy intake, but little is known about the relationship between children’s eating rate, food intake and adiposity. We examined whether children who eat faster consume more energy and whether this is associated with higher weight status and adiposity. We hypothesised that eating rate mediates the relationship between child weight and ad libitum energy intake. Children (n 386) from the Growing Up in Singapore Towards Healthy Outcomes cohort participated in a video-recorded ad libitum lunch at 4·5 years to measure acute energy intake. Videos were coded for three eating behaviours (bites, chews and swallows) to derive a measure of eating rate (g/min). BMI and anthropometric indices of adiposity were measured. A subset of children underwent MRI scanning (n 153) to measure abdominal subcutaneous and visceral adiposity. Children above/below the median eating rate were categorised as slower and faster eaters, and compared across body composition measures. There was a strong positive relationship between eating rate and energy intake (r 0·61, P < 0·001) and a positive linear relationship between eating rate and children’s BMI status. Faster eaters consumed 75 % more energy than slower-eating children (Δ548 kJ (Δ131 kcal); 95 % CI 107·6, 154·4, P < 0·001), and had higher whole-body (P < 0·05) and subcutaneous abdominal adiposity (Δ118·3 cc; 95 % CI 24·0, 212·7, P = 0·014). Mediation analysis showed that eating rate mediates the link between child weight and energy intake during a meal (b 13·59; 95 % CI 7·48, 21·83). Children who ate faster had higher energy intake, and this was associated with increased BMI z-score and adiposity.
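The mediation model reported above (eating rate mediating the child weight to energy intake link) can be sketched with pingouin's bootstrap mediation; column names are hypothetical.

```python
# Sketch: bootstrap mediation of BMI -> eating rate -> meal energy intake.
# Column names are hypothetical.
import pandas as pd
import pingouin as pg

df = pd.read_csv("gusto_lunch.csv")  # one row per child

res = pg.mediation_analysis(data=df, x="bmi_z", m="eating_rate_g_min",
                            y="energy_intake_kj", n_boot=5000, seed=42)
print(res)  # the 'Indirect' row gives the mediated effect with a bootstrap CI
```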
Restoration in Mediterranean-climate grasslands is strongly impeded by a lack of native propagules and by competition with exotic grasses and forbs. We report on a study testing several methods of exotic plant control combined with planting native grasses to restore prairies on former agricultural land in coastal California. Specifically, we compared tarping (shading out recently germinated seedlings with black plastic) once, tarping twice, topsoil removal, herbicide (glyphosate), and a control treatment in factorial combinations with or without wood mulch. Into each treatment we planted three native grass species (Elymus glaucus, Hordeum brachyantherum, and Stipa pulchra) and monitored plant survival and cover for three growing seasons. Survival of native grass species was high in all treatments, but was slightly lower in the unmulched soil removal and control treatments in the first 2 yr. Mulching, tarping, and herbicide were all effective in reducing exotic grass cover and enhancing native grass cover for the first 2 yr, but by the third growing season cover of the plant guilds and bare ground had mostly converged, primarily because of the declining effects of the initial treatments. Mulching and tarping were both considerably more expensive than herbicide treatment. Topsoil removal was less effective in increasing native grass cover, likely because soil removal altered the surface hydrology in this system. Our results show that several treatments were effective in enhancing native grass establishment, but that longer-term monitoring is needed to evaluate the efficacy of restoration efforts. The most appropriate approach to controlling exotics to restore specific grassland sites will depend not only on effectiveness but also on relative costs and site constraints.
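The factorial design described above lends itself to a two-way analysis of variance of cover by exotic-control treatment and mulching; the sketch below is illustrative, with hypothetical column names.

```python
# Sketch: two-way ANOVA of native grass cover by treatment and mulch.
# Plot-level data and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

plots = pd.read_csv("restoration_plots.csv")  # one row per plot per year

yr2 = plots[plots["year"] == 2]  # second growing season
model = smf.ols("native_cover ~ C(treatment) * C(mulch)", data=yr2).fit()
print(anova_lm(model, typ=2))  # main effects and treatment x mulch interaction
```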