The psychometric and classification literatures have shown that a wide class of discrete or network models (e.g., hierarchical or ultrametric trees) for the analysis of ordinal proximity data are plagued by potentially degenerate solutions if estimated using traditional nonmetric procedures (i.e., procedures which optimize a STRESS-based criterion of fit and whose solutions are invariant under a monotone transformation of the input data). This paper proposes a new parametric, maximum likelihood based procedure for estimating ultrametric trees for the analysis of conditional rank order proximity data. We present the technical aspects of the model and the estimation algorithm. Some preliminary Monte Carlo results are discussed. A consumer psychology application is provided examining the similarity of fifteen types of snack/breakfast items. Finally, some directions for future research are provided.
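The defining constraint of an ultrametric tree is the three-point condition: for every triple of objects, d(i, k) ≤ max(d(i, j), d(j, k)). As a small illustration of that property (not the paper's estimation procedure), a sketch of a brute-force check on a candidate distance matrix:

```python
import numpy as np

def is_ultrametric(D, tol=1e-9):
    """Check the ultrametric inequality d(i,k) <= max(d(i,j), d(j,k))
    for every triple of objects in a symmetric distance matrix D."""
    n = D.shape[0]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                if D[i, k] > max(D[i, j], D[j, k]) + tol:
                    return False
    return True

# Hypothetical example data: in an ultrametric, the two largest
# distances in every triple are equal.
D_ultra = np.array([[0.0, 1.0, 2.0],
                    [1.0, 0.0, 2.0],
                    [2.0, 2.0, 0.0]])

# An ordinary metric need not be ultrametric: here d(0,2) = 3
# exceeds max(d(0,1), d(1,2)) = 2.
D_not = np.array([[0.0, 1.0, 3.0],
                  [1.0, 0.0, 2.0],
                  [3.0, 2.0, 0.0]])
```

Fitting an ultrametric tree amounts to finding the matrix satisfying this constraint that is closest (under the chosen loss) to the observed proximities; it is the choice of loss, STRESS-based versus likelihood-based, that the abstract contrasts.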
In failing to define the units in which the stimulus is to be measured, the Weber law might seem to make no definite assertion, and indeed, it is shown that any single empirical function, supposed to relate a given stimulus intensity with that intensity which is just noticeably greater, can be put into the Weber form by a suitable change of scale in which the stimulus intensity is to be measured. Nevertheless, it turns out that if different individuals have different Weber functions, when the intensities are measured on a given scale, then it is by no means always possible to transform the scale so that all of the functions can take on the Weber form. Some necessary conditions are given for the possibility of such a transformation when there is at hand a finite number of functions, and when the functions depend upon a single parameter the necessary and sufficient condition is easily derived. The same discussion leads to a generalization of Thurstone's psychophysical scale and shows that such a scale is always possible.
On viewing Thurstone's psychophysical scale from the point of view of the mathematical theory of one-parameter continuous groups, it is seen that a variety of different psychological or statistical assumptions can all be made to lead to a scale possessing similar properties, though requiring different computational techniques for their determination. The natural extension to multi-dimensional scaling is indicated.
Necessary and sufficient conditions are given for a set of numbers to be the mutual distances of a set of real points in Euclidean space, and matrices are found whose ranks determine the dimension of the smallest Euclidean space containing such points. Methods are indicated for determining the configuration of these points, and for approximating to them by points in a space of lower dimensionality.
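The result described is the construction now known as classical multidimensional scaling: double-centre the squared distances to form a Gram matrix, whose positive semidefiniteness and rank answer, respectively, whether a Euclidean configuration exists and in how few dimensions. A minimal sketch of that construction (function name and tolerance are illustrative):

```python
import numpy as np

def classical_mds(D, tol=1e-9):
    """Given a symmetric matrix of pairwise distances D, double-centre the
    squared distances to obtain B = -0.5 * J @ (D**2) @ J.  The distances
    are realisable by points in Euclidean space iff B is positive
    semidefinite, and the rank of B is the smallest embedding dimension."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n       # centring matrix
    B = -0.5 * J @ (D ** 2) @ J
    w, V = np.linalg.eigh(B)                  # eigenvalues in ascending order
    if w.min() < -tol:
        raise ValueError("distances are not realisable in Euclidean space")
    dim = int((w > tol).sum())                # embedding dimension = rank(B)
    idx = np.argsort(w)[::-1][:dim]           # largest eigenvalues first
    X = V[:, idx] * np.sqrt(w[idx])           # point coordinates, one row each
    return X, dim
```

Truncating to fewer than `dim` eigenvectors yields the approximation in a lower-dimensional space that the abstract's final sentence refers to.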
It is shown that invariance requirements remove the indeterminacy in factor determination and lead to an integration of factorial studies with promise of considerable reduction in computational labor. The selection of significant primary factors is discussed, with special reference to Thurstone's simple structure criterion.
An array of information about the Antarctic ice sheet can be extracted from ice-sheet internal architecture imaged by airborne ice-penetrating radar surveys. We identify, trace and date three key internal reflection horizons (IRHs) across multiple radar surveys from South Pole to Dome A, East Antarctica. Ages of ~38 ± 2.2, ~90 ± 3.6 and ~162 ± 6.7 ka are assigned to the three IRHs, with verification of the upper IRH age from the South Pole ice core. The resultant englacial stratigraphy is used to identify the locations of the oldest ice, specifically in the upper Byrd Glacier catchment and the Gamburtsev Subglacial Mountains. The distinct glaciological conditions of the Gamburtsev Mountains, including slower ice flow, low geothermal heat flux and a frozen base, make it the more likely of the two to host the oldest ice. We also observe a distinct drawdown of IRH geometry around South Pole, indicative of melting from enhanced geothermal heat flux or the removal of deeper, older ice under a previous faster ice flow regime. Our traced IRHs underpin the wider objective to develop a continental-scale database of IRHs which will constrain and validate future ice-sheet modelling and the history of the Antarctic ice sheet.
The UK government launched a two-component sugar-reduction programme in 2016: the first component is the taxation of sugar-sweetened beverages (the Soft Drinks Industry Levy); the second is a voluntary sugar-reduction programme for the products contributing most to children’s sugar intakes. These policies provided incentives both for industry to change the products they sell and for people to change their food and beverage choices through a ‘signalling’ effect that has raised awareness of excess sugar intakes in the population. In this study, we aimed to identify the relative contributions of the supply- and demand-side drivers of changes in the sugar density of food and beverages purchased in Great Britain. While we found that both supply- and demand-side drivers contributed to decreasing the sugar density of beverage purchases (reformulation led to a 19 % reduction, product renewal 14 %, and consumer switching between products 8 %), for food products it was mostly supply-side drivers (reformulation and product renewal). Reformulation contributed consistently to a decrease in the sugar density of purchases across households, whereas changes in consumer choices were generally in the opposite direction, offsetting the benefits of reformulation. We studied the social gradient of sugar density reduction for breakfast cereals, achieved mostly by reformulation, and found larger reductions in sugar purchased by households of lower socio-economic status. Conversely, there was no social gradient for soft drinks. We conclude that taxes and reformulation incentives are complementary and that combining them in a programme to improve the nutritional quality of foods increases the probability of improvements in diet quality.
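The beverage figures above (reformulation 19 %, product renewal 14 %, switching 8 %) partition a change in purchase-weighted sugar density into supply- and demand-side components. The study's exact method is not given here; the sketch below shows one standard shift-share-style decomposition on hypothetical data (all products and figures are invented for illustration):

```python
def weighted_density(basket):
    """Purchase-weighted sugar density; basket maps product -> (volume, density)."""
    total = sum(v for v, _ in basket.values())
    return sum(v * d for v, d in basket.values()) / total

def decompose(base, follow):
    """Illustrative decomposition of the change in purchase-weighted sugar
    density between two periods (hypothetical scheme, not the study's own):
      reformulation -- density change of continuing products at baseline shares
      switching     -- share change of continuing products at baseline densities
      renewal       -- residual from product entry/exit (plus interaction)"""
    common = base.keys() & follow.keys()
    t0 = sum(v for v, _ in base.values())
    t1 = sum(v for v, _ in follow.values())
    reform = sum((base[p][0] / t0) * (follow[p][1] - base[p][1]) for p in common)
    switch = sum(base[p][1] * (follow[p][0] / t1 - base[p][0] / t0) for p in common)
    total = weighted_density(follow) - weighted_density(base)
    return {"total": total, "reformulation": reform,
            "switching": switch, "renewal": total - reform - switch}

# Invented example: product "A" is reformulated from 10 to 8 g/100g,
# purchase volumes are unchanged, so the whole change is reformulation.
base = {"A": (100.0, 10.0), "B": (100.0, 20.0)}
follow = {"A": (100.0, 8.0), "B": (100.0, 20.0)}
```

The choice of baseline weights (a Laspeyres-type decomposition here) is one of several conventions; mid-point weights would shift small amounts between the components.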
People living with mental illness report a broad spectrum of nutrition risks, beyond malnutrition, but appropriate and adequately validated nutrition risk screening tools for mental health settings are lacking. This study aimed to develop a nutrition-risk screening tool, the NutriMental Screener, and to perform preliminary feasibility and validity testing. In an international, stakeholder-engaging approach, a multifaceted nutrition-risk screening tool for mental health services was developed by means of workshops with international stakeholders and two online surveys. Feasibility of the NutriMental Screener was tested as part of a research study in Switzerland with 196 participants, evenly distributed across the three study groups (sixty-seven outpatients and sixty-five inpatients with psychotic or depressive disorders as well as sixty-four controls without mental illness). The NutriMental Screener consists of ten items covering different nutritional issues that indicate the need for referral to a dietitian or clinical nutritionist. Almost all patients (94·7 %) reported at least one nutrition risk by means of the NutriMental Screener. The prevalence of nutrition risks identified by the NutriMental Screener was higher in patients than in controls. Almost every second patient expressed a desire for nutritional support (44·7 %). After further validity testing is completed, there is the potential for the NutriMental Screener to replace malnutrition screening tools as routine screening in various mental health settings, aiming to organise nutritional therapy prescriptions in a more targeted and efficient manner.
Inflammation and infections such as malaria affect micronutrient biomarker concentrations and hence estimates of nutritional status. It is unknown whether correction for C-reactive protein (CRP) and α1-acid glycoprotein (AGP) fully captures the modification in ferritin concentrations during a malaria infection, or whether environmental and sociodemographic factors modify this association. Cross-sectional data from eight surveys in children aged 6–59 months (Cameroon, Cote d’Ivoire, Kenya, Liberia, Malawi, Nigeria and Zambia; n 6653) from the Biomarkers Reflecting Inflammation and Nutritional Determinants of Anaemia (BRINDA) project were pooled. Ferritin was adjusted using the BRINDA adjustment method, with values < 12 μg/l indicating iron deficiency. The association between current or recent malaria infection, detected by microscopy or rapid test kit, and inflammation-adjusted ferritin was estimated using pooled multivariable linear regression. Age, sex, malaria endemicity profile (defined by the Plasmodium falciparum infection prevalence) and malaria diagnostic methods were examined as effect modifiers. Unweighted pooled malaria prevalence was 26·0 % (95 % CI 25·0, 27·1) and unweighted pooled iron deficiency was 41·9 % (95 % CI 40·7, 43·1). Current or recent malaria infection was associated with a 44 % (95 % CI 39·0, 52·0; P < 0·001) increase in inflammation-adjusted ferritin after adjusting for age and study identifier. In children, ferritin increased less with malaria infection as age and malaria endemicity increased. Adjustment for malaria increased the prevalence of iron deficiency, but the effect was small. Additional information would help elucidate the underlying mechanisms of the role of endemicity and age in the association between malaria and ferritin.
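The reported 44 % increase is the kind of estimate obtained when ferritin is log-transformed before regression and the coefficient for a binary malaria indicator is exponentiated. Assuming that common workflow (the exact model specification is not given in this summary), the conversion from coefficient to percent difference is:

```python
import math

def percent_change_from_log_coef(beta):
    """Convert a regression coefficient on ln(ferritin) for a binary
    exposure (e.g. current/recent malaria infection) into the percent
    difference in geometric-mean ferritin between exposed and unexposed."""
    return (math.exp(beta) - 1.0) * 100.0

# A coefficient of ln(1.44) on the log scale corresponds to a 44 % increase.
example = percent_change_from_log_coef(math.log(1.44))
```

The same back-transformation applies to the confidence limits, which is why the interval around 44 % (39·0, 52·0) is asymmetric on the percent scale.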
This editorial considers the value and nature of academic psychiatry by asking what defines the specialty and psychiatrists as academics. We frame academic psychiatry as a way of thinking that benefits clinical services and discuss how to inspire the next generation of academics.
The idea that some abilities might be enhanced by adversity is gaining traction. Adaptation-based approaches have uncovered a few specific abilities enhanced by particular adversity exposures. Yet, for a field to grow, we must not dig too deep, too soon. In this paper, we complement confirmatory research with principled exploration. We draw on two insights from adaptation-based research: 1) enhanced performance manifests within individuals, and 2) reduced and enhanced performance can co-occur. Although commonly assumed, relative performance differences are rarely tested. To quantify them, we need a wide variety of ability measures. However, rather than using adaptive logic to predict which abilities are enhanced or reduced, we develop statistical criteria to identify three data patterns: reduced, enhanced, and intact performance. With these criteria, we analyzed data from the National Institute of Child Health and Human Development Study of Early Child Care and Youth Development to investigate how adversity shapes within-person performance across 10 abilities in a cognitive and achievement battery. Our goals are to document adversity-shaped cognitive performance patterns, identify drivers of reduced performance, identify sets of “intact” abilities, and discover new enhanced abilities. We believe principled exploration with clear criteria can help break new theoretical and empirical ground, remap old territory, and advance theory development.
The Anatomical Therapeutic Chemical (ATC) classification system, an indication-based system, is the World Health Organization (WHO) drug classification and is widely used in clinical and research practice; however, there have been questions about its scientific basis (1, 2). Neuroscience-based Nomenclature (NbN) has been developed by representatives from five international organisations with specific expertise in psychopharmacology, to address the issues around neuropsychopharmacological drug classification and to improve the focus on pharmacological domains and mode of action:
ECNP – European College of Neuropsychopharmacology
ACNP – American College of Neuropsychopharmacology
AsCNP – Asian College of Neuropsychopharmacology
CINP – International College of Neuropsychopharmacology
IUPHAR – International Union of Basic and Clinical Pharmacology
References:
1. Nutt DJ. Beyond psychoanaleptics - can we improve antidepressant drug nomenclature? [published correction appears in J Psychopharmacol. 2009 Sept;23(7):861]. J Psychopharmacol. 2009;23(4):343-345. doi:10.1177/0269881109105498
2. Zohar J, Stahl S, Moller HJ, et al. A review of the current nomenclature for psychotropic agents and an introduction to the Neuroscience-based Nomenclature. Eur Neuropsychopharmacol. 2015;25(12):2318-2325. doi:10.1016/j.euroneuro.2015.08.019
Objectives
As NbN is a novel classification system that can be used as a teaching tool as well as for other purposes, we aimed to understand the experience, views and needs, with regard to drug classification systems, of the psychiatric trainees and early career psychiatrists who will shape the future of psychiatry.
Methods
Ethical clearance for the study was obtained from King’s College London. We prepared an online survey (https://forms.gle/FCSdVTFH4U5QNn5t8) with a multinational group of early career psychiatrists who met through the CINP and EFPT, and test-ran the survey with a small group of psychiatric trainees. The online survey was then disseminated via mailing lists and groups of early career psychiatrists as well as through social media.
Results
At the time of this abstract submission, data collection is ongoing. Results will include analyses of experience with different drug classification systems, and of awareness, views and attainment of NbN, stratified according to demographic data (country, career status, main work setting).
Conclusions
The findings from this study will shed light on the views and needs of early career psychiatrists on this topic from both clinical and academic perspectives, a previously unexplored angle on drug classification systems. The findings can inform the planning of strategies to improve the use and teaching of these tools.
Digital Mental Health Interventions (DMHIs) that meet the definition of a medical device are regulated by the Medicines and Healthcare products Regulatory Agency (MHRA) in the UK. The MHRA uses procedures that were originally developed for pharmaceuticals to assess the safety of DMHIs. There is recognition that this may not be ideal, as is evident from an ongoing consultation for reform led by the MHRA and the National Institute for Health and Care Excellence.
Aims
The aim of this study was to generate an experts’ consensus on how the medical regulatory method used for assessing safety could best be adapted for DMHIs.
Method
An online Delphi study containing three rounds was conducted with an international panel of 20 experts with experience/knowledge in the field of UK digital mental health.
Results
Sixty-four items were generated, of which 41 achieved consensus (64%). Consensus emerged around ten recommendations, falling into five main themes: Enhancing the quality of adverse events data in DMHIs; Re-defining serious adverse events for DMHIs; Reassessing short-term symptom deterioration in psychological interventions as a therapeutic risk; Maximising the benefit of the Yellow Card Scheme; and Developing a harmonised approach for assessing the safety of psychological interventions in general.
Conclusion
The implementation of the recommendations provided by this consensus could improve the assessment of safety of DMHIs, making them more effective in detecting and mitigating risk.
Unhealthy food environments are major drivers of obesity and diet-related diseases(1). Improving the healthiness of food environments requires a widespread organised response from governments, civil society, and industry(2). However, current actions often rely on voluntary participation by industry, such as opt-in nutrition labelling schemes, school/workplace food guidelines, and food reformulation programmes. The aim of the REFORM study is to determine the effects of the provision of tailored support to companies on their nutrition-related policies and practices, compared to food companies that are not offered the programme (the control). REFORM is a two-country, parallel cluster randomised controlled trial. 150 food companies were randomly assigned (2:1 ratio) to receive either a tailored support intervention programme or no intervention. Randomisation was stratified by country (Australia, New Zealand), industry sector (fast food, other packaged food/beverage companies), and company size. The primary outcome is the nutrient profile (measured using Health Star Rating [HSR]) of foods and drinks produced by participating companies at 24 months post-baseline. Secondary outcomes include company nutrition policies and commitments, the nutrient content (sodium, sugar, saturated fat) of products produced by participating companies, display of HSR labels, and engagement with the intervention. Eighty-three eligible intervention companies were invited to take part in the REFORM programme and 21 (25%) accepted and were enrolled. Over 100 meetings were held with company representatives between September 2021 and December 2022. Resources and tailored reports were developed for 6 touchpoints covering product composition and benchmarking, nutrition labelling, consumer insights, nutrition policies, and incentives for companies to act on nutrition. Detailed information on programme resources and preliminary 12-month findings will be presented at the conference. 
The REFORM programme will assess if provision of tailored support to companies on their nutrition-related policies and practices incentivises the food industry to improve their nutrition policies and actions.
The diagnosis of functional constipation (FC) relies on patient-reported outcomes evaluated as criteria based on the clustering of symptoms. Although the ROME IV criteria for FC diagnosis are relevant for a multicultural population(1), how an individual’s lifestyle, environment and culture may influence the pathophysiology of FC remains a gap in our knowledge. Building on insights into mechanisms underpinning disorders of gut-brain interactions (formerly functional gastrointestinal disorders) in the COMFORT Cohort(2), this study aimed to investigate differences in gastrointestinal (GI) symptom scores between Chinese and non-Chinese New Zealanders with FC, in comparison to healthy controls. The Gastrointestinal Understanding of Functional Constipation In an Urban Chinese and Urban non-Chinese New Zealander Cohort (GUTFIT) study was a longitudinal cohort study, which aimed to determine a comprehensive profile of characteristics and biological markers of FC between Chinese and non-Chinese New Zealanders. Chinese (classified according to maternal and paternal ethnicity) or non-Chinese (mixed ethnicities) adults living in Auckland, classified as with or without FC based on ROME IV, were enrolled. GI symptoms, anthropometry, quality of life, diet, and biological samples were assessed monthly for 3 months, from March to June 2023. Demographics were obtained through a self-reported questionnaire, and GI symptoms were assessed using the Gastrointestinal Symptom Rating Scale (GSRS) and the Structured Assessment of Gastrointestinal Symptoms Scale (SAGIS). This analysis is a cross-sectional assessment of patient-reported outcomes of GI symptoms. Of 78 enrolled participants, 66 completed the study (male, n = 10; female, n = 56) and were distributed across: Chinese with FC (Ch-FC; n = 11), Chinese control (Ch-CON; n = 19), non-Chinese with FC (NCh-FC; n = 16), non-Chinese control (NCh-CON; n = 20).
Mean (SD) age, body mass index, and waist circumference were 40 ± 9 years, 22.7 ± 2.5 kg/m2, and 78.0 ± 7.6 cm, respectively. Ethnicity did not impact SAGIS domain scores for GI symptoms (ethnicity x FC severity interaction p>0.05). However, the constipation symptoms domain of the GSRS was scored differently depending on ethnicity and FC status (ethnicity x FC interaction p<0.05). In post hoc comparison, NCh-FC tended to have higher GSRS constipation severity scores than Ch-FC (3.8 ± 0.8 versus 3.4 ± 1.0 out of 8, p<0.1). Although constipation symptom severity tended to be higher in NCh-FC, on the whole, ethnicity did not explain variation in this cohort. FC status was a more important predictor of GI symptom scores. Future research will assess differences in symptom burden to explore ethnicity-specific characteristics of FC.
Young children, especially those under one year of age, are at higher risk of choking on food due to their body’s immature physiology and chewing, swallowing and coughing ability(1). In 2020, the Ministry of Education mandated the Ministry of Health’s food-related choking guidance for babies and young children at early learning services (ELS), adding it to the licensing criteria(2). Some ELS managers reported that this policy may negatively influence the food and nutrition environment within ELS(3). This study aimed to assess the impact of the food-related choking policy on the food and nutrition environment within ELS. Data were collected using an online Qualtrics questionnaire from ELS in four District Health Board regions: Waikato, Bay of Plenty, Lakes, and Auckland (N = 1066), sourced from the Ministry of Education, Education Counts database. Responses were received from 179 ELS (17%) and most reported making changes due to the food-related choking guidance. The main changes were to the food provided by the ELS (75%), education for whānau/family (73%), and supervision of children (70%). Over half of the centres reported adjusting staff duties to allow for increased supervision of eating (60%) and changed/ceased celebrations or fundraisers (58%). Over half of the respondents (55%) reported that changes to reduce the risk of food-related choking had affected the ‘cultural kaupapa’ (plan/policy) of the ELS. A key theme from written responses was that centres had ‘not come together as whānau’, which refers to reduced hosting of centre events/celebrations within the centre and externally with children and whānau (families). The main reason appeared to be that the food restrictions in the guidance made the management of ‘shared kai (food)’ too difficult. Approximately two-thirds of centres (61%) reported removing foods from menus, and around half (49%) modified the texture of foods.
Fifty-one per cent of ELS reported that there had been no change in parent-supplied food. The main foods removed from ELS menus were fruit, vegetables, hard crackers, sausages/other meats, and popcorn. Soft fruit, e.g., canned fruit, soft crackers, and soft meats (hamburger patties, mince, luncheon, and ham), were the main foods added to menus. ELS have responded to most of the new food-related choking guidance requirements regarding food provision, texture modification, and supervision; however, some ELS may need support to implement the guidance fully. Ceasing shared kai events at ELS has reduced opportunities to engage with whānau and limits cultural expression, connection, and reciprocal learning and teaching about food and nutrition between the centre and whānau, as outlined in Te Whariki Early Childhood Curriculum. Improved communication and support for parents and ELS to implement the recommendations for home and centre-supplied foods are needed, together with sufficient funding for supervision and nutrition education to support children’s learning and cultural needs around food.
Distinct pathophysiology has been identified with disorders of gut-brain interactions (DGBI), including functional constipation (FC)(1,2), yet the causes remain unclear. Identifying how modifiable factors (i.e., diet) differ depending on gastrointestinal health status is important to understand relationships between dietary intake, pathophysiology, and disease burden of FC. Given that dietary choices are culturally influenced, understanding ethnicity-specific diets of individuals with FC is key to informing appropriate symptom management and prevention strategies. Despite distinct genetic and cultural features of Chinese populations with increasing FC incidence(3), DGBI characteristics are primarily described in Caucasian populations(2). We therefore aimed to identify how dietary intake of Chinese individuals with FC differs from that of non-Chinese individuals with FC, relative to healthy controls. The Gastrointestinal Understanding of Functional Constipation In an Urban Chinese and Urban non-Chinese New Zealander Cohort (GUTFIT) study was a longitudinal case-control study using systems biology to investigate the multi-factorial aetiology of FC. Here we conducted a cross-sectional dietary intake assessment, comparing Chinese individuals with FC (Ch-FC) against three control groups: a) non-Chinese with FC (NCh-FC), b) Chinese without FC (Ch-CON) and c) non-Chinese without FC (NCh-CON). Recruitment from Auckland, New Zealand (NZ) identified Chinese individuals based on self-identification alongside both parents self-identifying as Chinese, and FC using the ROME IV criteria. Dietary intake was captured using 3-day food diaries recorded on consecutive days, including one weekend day. Nutrient analysis was performed by Foodworks 10 and statistical analysis with SPSS using a generalised linear model (ethnicity and FC status as fixed factors). Of 78 enrolled participants, 66 completed the study and 64 (39.4 ± 9.2 years) completed a 3-day food diary at the baseline assessment.
More participants were female (84%) than male (16%). FC and ethnicity status allocated participants into 1 of 4 groups: Ch-FC (n = 11), Ch-CON (n = 18), NCh-FC (n = 16), NCh-CON (n = 19). Within NCh, ethnicities included NZ European (30%), non-Chinese Asian (11%), Other European (11%), and Latin American (2%). Fibre intake did not differ between Ch-FC and NCh-FC (ethnicity × FC status interaction p>0.05) but was independently lower overall for FC than CON individuals (21.8 ± 8.7 versus 27.0 ± 9.7 g, p<0.05) and overall for Ch than NCh (22.1 ± 8.0 versus 27.0 ± 10.4 g, p<0.05). Carbohydrate, protein, and fat intakes did not differ across groups (p>0.05 for each). In the context of fibre and macronutrient intake, there is no difference between Ch-FC and NCh-FC. Therefore, fibre and macronutrients are unlikely to contribute to potential pathophysiological differences in FC between ethnic groups. A more detailed assessment of dietary intake concerning micronutrients, types of fibre, or food choices may be indicated to ascertain whether other dietary differences exist.
The reading the mind in the eyes test (RMET) – which assesses the theory of mind component of social cognition – is often used to compare social cognition between patients with schizophrenia and healthy controls. There is, however, no systematic review integrating the results of these studies. We identified 198 studies published before July 2020 that administered RMET to patients with schizophrenia or healthy controls from three English-language and two Chinese-language databases. These studies included 41 separate samples of patients with schizophrenia (total n = 1836) and 197 separate samples of healthy controls (total n = 23 675). The pooled RMET score was 19.76 (95% CI 18.91–20.60) in patients and 25.53 (95% CI 25.19–25.87) in controls (z = 12.41, p < 0.001). After excluding small-sample outlier studies, this difference in RMET performance was greater in studies using non-English v. English versions of RMET (Chi [Q] = 8.54, p < 0.001). Meta-regression analyses found a negative association of age with RMET score and a positive association of years of schooling with RMET score in both patients and controls. A secondary meta-analysis using a spline construction of 180 healthy control samples identified a non-monotonic relationship between age and RMET score – RMET scores increased with age before 31 and decreased with age after 31. These results indicate that patients with schizophrenia have substantial deficits in theory of mind compared with healthy controls, supporting the construct validity of RMET as a measure of social cognition. The different results for English versus non-English versions of RMET and the non-monotonic relationship between age and RMET score highlight the importance of the language of administration of RMET and the possibility that the relationship of aging with theory of mind is different from the relationship of aging with other types of cognitive functioning.
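The pooled RMET scores reported (19.76 in patients, 25.53 in controls) are weighted averages across samples. The review's exact weighting scheme is not stated in this summary; a minimal fixed-effect inverse-variance sketch, in which each sample is weighted by the inverse of its squared standard error, looks like this:

```python
import math

def pooled_mean(means, sds, ns):
    """Fixed-effect inverse-variance pooled mean across independent samples.
    Each sample contributes weight 1/SE^2, where SE = sd / sqrt(n).
    Returns the pooled estimate and its 95% confidence interval."""
    weights = [n / (sd ** 2) for sd, n in zip(sds, ns)]   # 1/SE^2
    est = sum(w * m for w, m in zip(weights, means)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return est, (est - 1.96 * se, est + 1.96 * se)
```

A random-effects model, which adds a between-study variance term to each weight, would typically be preferred when samples are as heterogeneous as the 197 control samples described; the fixed-effect version is shown only because it is the simplest correct instance of inverse-variance pooling.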
Children with CHD or born very preterm are at risk for brain dysmaturation and poor neurodevelopmental outcomes. Yet, studies have primarily investigated neurodevelopmental outcomes of these groups separately.
Objective:
To compare neurodevelopmental outcomes and parent behaviour ratings of children born term with CHD to children born very preterm.
Methods:
A clinical research sample of 181 children (CHD [n = 81]; very preterm [≤32 weeks; n = 100]) was assessed at 18 months.
Results:
Children with CHD and born very preterm did not differ on Bayley-III cognitive, language, or motor composite scores, or on expressive or receptive language, or on fine motor scaled scores. Children with CHD had lower gross motor scaled scores compared to children born very preterm (p = 0.047). More children with CHD had impaired scores (<70 SS) on language composite (17%), expressive language (16%), and gross motor (14%) indices compared to children born very preterm (6%; 7%; 3%; ps < 0.05). No group differences were found on behaviours rated by parents on the Child Behaviour Checklist (1.5–5 years) or the proportion of children with scores above the clinical cutoff. English as a first language was associated with higher cognitive (p = 0.004) and language composite scores (p < 0.001). Lower median household income and English as a second language were associated with higher total behaviour problems (ps < 0.05).
Conclusions:
Children with CHD were more likely to display language and motor impairment compared to children born very preterm at 18 months. Outcomes were associated with language spoken in the home and household income.