New Zealand and Australian governments rely heavily on voluntary industry initiatives to improve population nutrition, such as voluntary front-of-pack nutrition labelling (Health Star Rating [HSR]), industry-led food advertising standards, and optional food reformulation programmes. Research in both countries has shown that food companies vary considerably in their nutrition policies and practices(1). We aimed to determine whether a tailored nutrition support programme for food companies improved their nutrition policies and practices compared with control companies that were not offered the programme. REFORM was a 24-month, two-country, cluster-randomised controlled trial. A total of 132 major packaged food/drink manufacturers (n=96) and fast-food companies (n=36) were randomly assigned (2:1 ratio) to receive a 12-month tailored support programme or to a control group (no intervention). The intervention group was offered a programme designed and delivered by public health academics comprising regular meetings, tailored company reports, and recommendations and resources to improve product composition (e.g., reducing nutrients of concern through reformulation), nutrition labelling (e.g., adoption of HSR labels), marketing to children (reducing children's exposure to unhealthy products and brands), and nutrition policy and corporate sustainability reporting. The primary outcome was the nutrient profile (measured using HSR) of company food and drink products at 24 months. Secondary outcomes were the nutrient content (energy, sodium, total sugar, and saturated fat) of company products, display of HSR labels on packaged products, company nutrition-related policies and commitments, and engagement with the intervention. Eighty-eight eligible intervention companies (9,235 products at baseline) were invited to participate, of whom 21 accepted and were enrolled in the REFORM programme (delivered between September 2021 and December 2022).
Forty-four companies (3,551 products at baseline) were randomised to the control arm. At 24 months, the model-adjusted mean HSR of intervention company products was 2.58 compared with 2.68 for control companies, with no significant difference between groups (mean difference -0.10, 95% CI -0.40 to 0.21, p-value 0.53). A per-protocol analysis of intervention companies that enrolled in the programme compared with control companies with no major protocol violation also found no significant difference (2.93 vs 2.64, mean difference 0.29, 95% CI -0.13 to 0.72, p-value 0.18). We found no significant differences between the intervention and control groups in any secondary outcome, except total sugar (g/100g), for which the sugar content of intervention company products was higher than that of control companies (12.32 vs 6.98, mean difference 5.34, 95% CI 1.73 to 8.96, p-value 0.004). The per-protocol analysis for sugar did not show a significant difference (10.47 vs 7.44, mean difference 3.03, 95% CI -0.48 to 6.53, p-value 0.09). In conclusion, a 12-month tailored nutrition support programme for food companies did not improve the nutrient profile of company products.
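The trial's headline comparison is a difference in mean HSR with a 95% confidence interval. As an illustration of the generic calculation behind such a comparison (not the REFORM analysis itself, which used model-adjusted means accounting for clustering of products within companies), here is a minimal unadjusted sketch in Python using hypothetical HSR values:

```python
import math

def mean_diff_ci(x, y, z=1.96):
    """Unadjusted 95% CI for the difference in two group means.

    Illustrative sketch only: the actual trial analysis was
    model-adjusted, which this deliberately omits.
    """
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    # Sample variances of each group
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    # Standard error of the difference (Welch form)
    se = math.sqrt(vx / nx + vy / ny)
    d = mx - my
    return d, (d - z * se, d + z * se)

# Hypothetical HSR values for products in two small groups
intervention = [2.0, 2.5, 3.0, 2.5, 3.0]
control = [2.5, 3.0, 2.5, 3.5, 2.0]
d, (lo, hi) = mean_diff_ci(intervention, control)
# A CI spanning zero, as here, indicates no significant difference
```

If the interval contains zero, the difference is not significant at the 5% level, which is how the trial's -0.10 (95% CI -0.40 to 0.21) result should be read.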
Recent changes to US research funding are having far-reaching consequences that imperil the integrity of science and the provision of care to vulnerable populations. Resisting these changes, the BJPsych Portfolio reaffirms its commitment to publishing mental science and advancing psychiatric knowledge that improves the mental health of one and all.
Blast injuries can occur by a multitude of mechanisms, including improvised explosive devices (IEDs), military munitions, and accidental detonation of chemical or petroleum stores. These injuries disproportionately affect people in low- and middle-income countries (LMICs), where there are often fewer resources to manage complex injuries and mass-casualty events.
Study Objective:
The aim of this systematic review is to describe the literature on the acute facility-based management of blast injuries in LMICs to aid hospitals and organizations preparing to respond to conflict- and non-conflict-related blast events.
Methods:
A search of Ovid MEDLINE, Scopus, Global Index Medicus, Web of Science, CINAHL, and Cochrane databases was used to identify relevant citations from January 1998 through July 2024. This systematic review was conducted in adherence with PRISMA guidelines. Data were extracted and analyzed descriptively. A meta-analysis calculated the pooled proportions of mortality, hospital admission, intensive care unit (ICU) admission, intubation and mechanical ventilation, and emergency surgery.
Results:
Reviewers screened 3,731 titles and abstracts and 173 full texts. Seventy-five articles from 22 countries were included for analysis. Only 14.7% of included articles came from low-income countries (LICs). Sixty percent of studies were conducted in tertiary care hospitals. The mean proportion of patients who were admitted was 52.1% (95% CI, 37.6% to 66.4%). Among all in-patients, 20.0% (95% CI, 12.4% to 28.8%) were admitted to an ICU. Overall, 38.0% (95% CI, 25.6% to 51.3%) of in-patients underwent emergency surgery and 13.8% (95% CI, 2.3% to 31.5%) were intubated. Pooled in-patient mortality was 9.5% (95% CI, 4.6% to 15.6%) and total hospital mortality (including emergency department [ED] mortality) was 7.4% (95% CI, 3.4% to 12.4%). There were no significant differences in mortality when stratified by country income level or hospital setting.
Conclusion:
Findings from this systematic review can be used to guide preparedness and resource allocation for acute care facilities. Pooled proportions for mortality and other outcomes described in the meta-analysis offer a metric by which future researchers can assess the impact of blast events. Under-representation of LICs and non-tertiary care medical facilities and significant heterogeneity in data reporting among published studies limited the analysis.
This review aims to highlight the relative importance of lifestyle-associated cardiovascular disease (CVD) risk factors among individuals with inflammatory bowel disease (IBD) and examine the effectiveness of lifestyle interventions to improve these CVD risk factors. Adults with IBD are at higher risk of CVD due to systemic and gut inflammation. In addition, tobacco smoking, dyslipidaemia, hypertension, obesity, physical inactivity, and poor diet can also increase CVD risk. Typical IBD behavioural modifications, including food avoidance and reduced physical activity, as well as frequent corticosteroid use, can further increase CVD risk. We reviewed seven studies and found that there is insufficient evidence to draw conclusions about the effects of diet and/or physical activity interventions on CVD risk outcomes among populations with IBD. However, the limited findings suggest that people with IBD can adhere to a healthy diet or a Mediterranean diet (for which there is most evidence) and safely participate in moderately intense aerobic and resistance training to potentially improve anthropometric risk factors. This review highlights the need for more robust controlled trials with larger sample sizes to assess and confirm the effects of lifestyle interventions to mitigate modifiable CVD risk factors among the IBD population.
Patients with inflammatory bowel disease (IBD) have higher risk of developing cardiometabolic diseases due to chronic gut and systemic inflammation which promotes atherogenesis. Adopting healthy lifestyle habits can prevent development of cardiometabolic diseases, but can be challenging for people with IBD. The IBD exercise and diet (IBDeat) habits study describes the lifestyle habits and cardiometabolic disease risk factors of adults with IBD in Aotearoa, New Zealand (NZ).
This is a cross-sectional study including adult NZ IBD patients recruited online via Crohn’s and Colitis NZ and Dunedin hospital from 2021 to 2022. An online questionnaire collected demographics, smoking status, comorbidities, medications, disease severity scores, quality of life, physical activity, and dietary intake. The Dunedin cohort had physical measurements taken, including anthropometrics, handgrip strength, blood pressure, body composition (bioelectrical impedance), blood nutritional markers, and faecal calprotectin. Data were compared to established reference values, and linear regression analysis investigated associations between lifestyle habits and cardiometabolic risk factors. The study received University of Otago ethical approval (reference: H21/135). A total of 213 adults with IBD (54% Crohn’s disease; 46% ulcerative colitis) completed the online questionnaire and a subset of 102 from Dunedin provided physical measurements. Participant characteristics were: median age 37 (IQR 25, 51) years, 71% female, 82% NZ European, 4% smokers, and 1.4% had active IBD. Thirty-five percent of participants had at least one comorbidity and 34% had poor quality of life. Known dietary risk factors associated with cardiometabolic diseases were common: low intakes of vegetables (77%), fruit (51%), and fibre (35%), and high intakes of total fat (84%) and saturated fat (98%). Physical activity recommendations were met by 61% of participants, and 63% reported barriers to being more active, most commonly fatigue (63%) and joint pain (54%). Other cardiometabolic risk factors were common in the Dunedin cohort: high LDL (79%) and total cholesterol (76%), central adiposity (64%), high body fat percentage (44%), high blood pressure (26%), and low handgrip strength (25%).
Regression analysis showed that vegetable intake (per serve) and carbohydrate intake (per 5% of total daily energy intake (TE)) were associated with 0.22 mmol/L (95% CI 0.43, 0.013) and 0.20 mmol/L (95% CI 0.34, 0.057) lower LDL cholesterol, respectively. Discretionary food items were associated with higher LDL cholesterol: 0.11 mmol/L per daily serve (95% CI 0.028, 0.19). A 5% difference in TE intake from carbohydrate was associated with 1.11% (95% CI 2.22%, 0.0038%) lower body fat percentage, while protein was associated with 3.1% (95% CI 0.81%, 5.39%) higher body fat percentage. Physical activity had only weak associations with cardiometabolic disease risk factors. Adults with IBD have multiple modifiable risk factors for cardiometabolic diseases. Vegetable and carbohydrate intakes were associated with lower LDL cholesterol concentration, whereas discretionary food item intake was associated with higher LDL cholesterol. Protein intake was associated with higher body fat percentage.
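The per-serve associations reported above come from linear regression. As a reminder of the underlying calculation, here is a minimal least-squares sketch on entirely hypothetical data (the study's actual models were presumably adjusted for covariates, which this omits):

```python
def ols_slope(x, y):
    """Simple least-squares slope and intercept, pure Python.

    Illustrative only: a real analysis would adjust for
    confounders and report confidence intervals.
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # Covariance and variance sums around the means
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical data: vegetable serves/day vs LDL (mmol/L)
serves = [0, 1, 2, 3, 4, 5]
ldl = [3.6, 3.4, 3.2, 3.0, 2.8, 2.6]
slope, intercept = ols_slope(serves, ldl)
# On this perfectly linear toy data the slope is -0.2 mmol/L per serve
```

A negative slope corresponds to the kind of inverse association the abstract reports between vegetable intake and LDL cholesterol.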
Inflammatory bowel diseases (IBD) are chronic intestinal disorders characterised by periods of quiescent disease and episodes of heightened disease activity. The diseases mainly affect the gastrointestinal tract. Patients often experience a limited quality of life as a result of dietary restrictions, fatigue, and other factors, leading to mood disturbances, malnutrition, and inactivity, among others. This presentation will give an overview of work done to identify the factors underlying the above findings, which in our view are to some degree modifiable. We will look at the availability and expertise of dietitians supporting patients with IBD, and at dietary and lifestyle modifications aimed at reducing the burden of disease.
The excitation conditions of the magnetorotational instability (MRI) are studied for axially unbounded Taylor–Couette (TC) flows of various gap widths between the cylinders. The cylinders are considered as made from either perfectly conducting or insulating material, and the conducting fluid with a finite but small magnetic Prandtl number rotates with a quasi-Keplerian velocity profile. The solutions are optimized with respect to the wavenumber and the Reynolds number of the rotation of the inner cylinder. For the axisymmetric modes, we find that the critical Lundquist number of the applied axial magnetic field is the lower, the wider the gap between the cylinders. A similar result is obtained for the induced cell structure: the wider the gap, the more spherical the cells are. The marginal rotation rate of the inner cylinder – for a fixed size of the outer cylinder – always possesses a minimum for gap widths that are neither too wide nor too narrow. For perfectly conducting walls the minimum lies at $r_{\rm in}\simeq 0.4$, where $r_{\rm in}$ is the ratio of the radii of the two rotating cylinders. The lowest magnetic field amplitudes to excite the instability are required for TC flows between perfectly conducting cylinders with gaps corresponding to $r_{\rm in}\simeq 0.2$. For even wider, and also for very thin, gaps the needed magnetic fields and rotation frequencies are shown to become rather huge. The non-axisymmetric modes with $|m|=1$ have also been considered. Their excitation generally requires stronger magnetic fields and higher magnetic Reynolds numbers than for the axisymmetric modes. If TC experiments with rotation too slow for the applied magnetic fields yield unstable modes of any azimuthal symmetry, such as the currently reported Princeton experiment (Wang et al., Phys. Rev. Lett., vol.
129, 115001), then players other than the MRI-typical linear combination of current-free fields and differential rotation, including axial boundary effects, should be in the game.
Accumulating evidence suggests that corpus callosum development is critically involved in the emergence of behavioral and cognitive skills during the first two years of life and that structural abnormalities of the corpus callosum are associated with a variety of neurodevelopmental disorders. Indeed, by adulthood ∼30% of individuals with agenesis of the corpus callosum (ACC), a congenital condition resulting in a partially or fully absent corpus callosum, exhibit phenotypic features consistent with autism spectrum disorder (ASD). However, very little is known about developmental similarities and/or differences between infants with ACC and infants who develop ASD. This study describes temperament in infants with ACC during the first year of life in comparison with a neurotypical control group. Additionally, it examines the potential contribution of disrupted callosal connectivity to early expression of temperament in ASD through comparison to children with high familial likelihood of ASD.
Participants and Methods:
Longitudinal ratings of positive and negative emotionality were acquired at 6 and 12 months on the Infant Behavior Questionnaire-Revised across four groups of infants: isolated complete and partial ACC (n=104), high familial likelihood of ASD who do and do not have a confirmed ASD diagnosis (HL+ n=81, HL- n=282), and low-likelihood controls (LL- n=152).
Results:
Overall, the ACC group demonstrated blunted affect, with significantly lower positive and negative emotionality than LL controls at both timepoints. Specifically, the ACC group exhibited lower activity and approach dimensions of positive emotionality at both timepoints, with lower high-intensity pleasure at 6 months and lower vocal reactivity at 12 months. On negative emotionality subscales, the ACC group exhibited lower distress to limitations and sadness at both timepoints, as well as lower falling reactivity at 6 months. The ACC and HL groups did not differ significantly on positive emotionality at either timepoint. However, negative emotionality was lower in the ACC group than the HL- group at both timepoints and lower than the HL+ group at 12 months, with lower distress to limitations and sadness ratings than both HL groups at both timepoints.
Conclusions:
These findings highlight the importance of interhemispheric connections in facilitating active engagement and pursuit of pleasurable activities during the first year of life, as well as expression of sadness and distress to limitations. Notably, similarities between infants with ACC and infants at elevated familial risk of ASD suggest that disrupted callosal connectivity may specifically contribute to reductions in positive emotionality.
It is unclear how agenesis of the corpus callosum (ACC), a congenital brain malformation defined by complete or partial absence of the corpus callosum, impacts language development. fMRI studies of middle childhood suggest that the corpus callosum plays a role in the interhemispheric language network (Bartha-Doering et al., 2020), and that reduced interhemispheric functional connectivity is correlated with worse language abilities in children with ACC (Bartha-Doering et al., 2021). Additionally, accumulating evidence suggests structural abnormalities of the corpus callosum play a role in neurodevelopmental disorders. While children who go on to receive an autism spectrum disorder (ASD) diagnosis may show early signs of altered word and gesture acquisition (Iverson et al., 2018), the same is not known about ACC. This study examined language development during the second year of life in children with ACC in comparison to neurotypical control participants, as well as other children at elevated risk of ASD.
Participants and Methods:
The MacArthur-Bates Communicative Development Inventories (MCDI): Words and Gestures scales were administered to parents of 74 children with isolated ACC at 12, 18 and 24 months of age. Children whose first language was not English and children who were bilingual were excluded. Comparison groups consisted of individuals with a low familial likelihood of ASD (LL- n=140) and individuals with high familial likelihood of ASD who do and do not have a confirmed ASD diagnosis (HL+ n=68, HL- n=256).
Results:
Compared to LL controls, the ACC group produced fewer words at 18 and 24 months of age, and demonstrated fewer words understood at all three timepoints. Similarly, compared to the HL- group, the ACC group demonstrated fewer words produced and understood at 18 months of age, and fewer words produced at 24 months of age. The ACC and HL+ groups did not differ in words produced or words understood at any timepoint.
Conclusions:
Overall, infants with ACC demonstrated delayed vocabulary expansion from 12 to 24 months of age. These findings illustrate the role of callosal connectivity in the development of language across the first 2 years of life, and highlight the need for support and interventions that target vocabulary production and comprehension.
Baseline assessment of cognitive performance is common practice under many concussion management protocols and is required for collegiate athletes by the NCAA. The purpose of baseline cognitive assessment is to understand an athlete’s individual uninjured cognitive performance, as opposed to using population normative data. This baseline can then serve as a reference point for recovery after concussion and can inform return-to-play decisions. However, multiple factors, including lack of effort, can contribute to misrepresentation of baseline results, which raises concerns about reliability during return-to-play decision-making. Measuring effort across a continuum, rather than as a dichotomous variable (good versus poor effort), may provide informative insight into cognitive performance at baseline.
Participants and Methods:
Collegiate athletes (n = 231) completed the Immediate Post-Concussion Assessment and Cognitive Test (ImPACT) as part of their baseline pre-participation concussion evaluation. ImPACT creates composite scores of Verbal Memory, Visual Memory, Visual-Motor Speed, and Reaction Time. Baseline self-reported symptoms and total hours of sleep the night prior to testing are also collected through ImPACT. ImPACT has one embedded indicator within the program to assess effort, and research has identified an additional three embedded indicators. Athletes were also administered one stand-alone performance validity test, either the Medical Symptom Validity Test (n = 130) or the Rey Dot Counting Test (n = 101), to independently measure effort. Effort was estimated across a continuum (zero, one, two, or three or more failed effort indicators) with both stand-alone and embedded effort indicators. We evaluated the relationship between effort, symptoms, self-reported sleep, Reaction Time composite score and Visual-Motor Speed composite score using a linear regression model.
Results:
We found that 121 athletes passed all effort indicators, while 39 athletes failed only one effort indicator, 40 athletes failed two effort indicators, and 31 athletes failed three or four (three+) effort indicators. Self-reported symptoms and total hours of sleep were not related to effort, but Reaction Time and Visual-Motor Speed composites were. Specifically, performance on the Visual-Motor Speed composite was significantly worse for athletes who failed two or three+ effort indicators compared to athletes who did not fail any, and performance on the Reaction Time composite was significantly worse only for athletes who failed three+ effort indicators. Additionally, athletes who failed one or more effort indicators and reported less sleep performed worse on both the Visual-Motor Speed and Reaction Time composites, compared to those who reported less sleep and did not fail any effort indicators.
Conclusions:
Athletes who failed one effort indicator did not perform significantly worse on the Reaction Time and Visual-Motor Speed composites compared to those who passed all effort indicators. However, 31% of athletes failed two or more effort indicators, and these athletes performed worse on cognitive tests, likely due to factors impacting their ability to put forth good effort. These results suggest that effort is more complex than the dichotomous variable previously used and highlight the importance of using several indicators of effort throughout baseline assessments. In addition, the importance of sleep should be emphasized during baseline assessments, especially when effort is questionable.
Differences in adaptive functioning present early in development for many children with monogenic (Down Syndrome, Fragile X) and neurodevelopmental disorders. At this time, it is unclear whether children with ACC present with early adaptive delays, or if difficulties emerge later as functional tasks become more complex. While potential delays in motor development are frequently reported, other domains such as communication, social and daily living skills are rarely described. We used a prospective, longitudinal design to examine adaptive behavior from 6-24 months in children with ACC and compared their trajectories to those with monogenic and neurodevelopmental conditions.
Participants and Methods:
Our sample included children with primary ACC (n = 27-47 depending on time point) whose caregivers completed the Vineland Adaptive Behavior Scales-Interview, 3rd Edition, via phone at 6, 12, 18 and 24 months. Comparison samples (using the Vineland-2) included children with Down Syndrome (DS; n = 15-56), Fragile X (FX; n = 15-20), children at high familial likelihood for autism (HL-; n = 192-280), and low likelihood (LL; no family history of autism and no developmental/behavioral diagnosis; n = 111-196). A subset of the HL children received an autism diagnosis (HL+; n = 48-74). The DS group did not have an 18-month Vineland.
Results:
A series of linear mixed model analyses (using maximum likelihood) for repeated measures was used to compare groups on three Vineland domains at the 6-, 12-, 18- and 24-month timepoints. All fixed factors (diagnostic group, timepoint, and group X timepoint interaction) accounted for significant variance on all Vineland domains (p < .001). Post hoc comparisons with Bonferroni correction examined ACC Vineland scores compared to the other diagnostic groups at each timepoint. At 6 months, parent ratings indicated the ACC group had significantly weaker skills than the LL group in the Communication and Motor domains. At 12, 18 and 24 months, ratings revealed weaker Communication, Daily Living and Motor skills in the ACC group compared to both the LL and HL- groups. Compared to the other clinical groups, the ACC group had stronger Socialization and Motor skills than the Fragile X group at 6 months, and at 24 months had stronger Communication and Socialization skills than both the DS and FX groups, as well as stronger Socialization than the HL+ group.
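The Bonferroni correction used for the post hoc comparisons is simple to state: each raw p-value is multiplied by the number of comparisons and capped at 1. A generic sketch with hypothetical p-values (the study's actual tests came from mixed-model software):

```python
def bonferroni(pvals):
    """Bonferroni-adjusted p-values.

    Generic sketch of the correction named in the analysis;
    the input p-values below are hypothetical.
    """
    m = len(pvals)  # number of comparisons
    return [min(1.0, p * m) for p in pvals]

# Hypothetical raw p-values from three post hoc group comparisons
adjusted = bonferroni([0.004, 0.02, 0.30])
# Each raw p-value is tripled; any product above 1 is capped at 1.0
```

The correction is conservative: with many comparisons, only strong raw effects survive adjustment, which is why it is a common default for post hoc testing.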
Conclusions:
Compared to children with low likelihood of ASD, children with primary ACC reportedly have weaker Communication and Motor skills from 6 to 24 months, with weakness in Daily Living Skills appearing at 12 months and all differences increasing with age. Compared to Fragile X, the ACC group exhibited relative strengths in Socialization and Motor skills starting at 6 months. By 24 months, the ACC group was outperforming the monogenic groups on Socialization and Communication. In general, the ACC scores were consistent with the HL+ sample, except that the ACC group had stronger Social skills at 18 and 24 months. The results clearly indicate the need for early intervention in the domains of motor and language skills. Additionally, as children with ACC are at increased risk for social difficulties, research is needed both using more fine-grained social-communication tools and following children from infancy through middle childhood.
Intravesical Bacillus Calmette-Guérin (BCG) is a standard therapy for non–muscle-invasive bladder cancer used in urology clinics and inpatient settings. We present a review of infection risks to patients receiving intravesical BCG, healthcare personnel who prepare and administer BCG, and other patients treated in facilities where BCG is prepared and administered. Knowledge of these risks and relevant regulations informs appropriate infection prevention measures.
The Kaya forests in Southern Kenya are valuable habitats for rare animal and plant species and provide various ecosystem services. The Kaya forests are also centres of cultural life and are of great relevance to the rites, traditions, and social order of the local community. During the past decades, these forest remnants have come under extreme pressure from land use and resource exploitation and are in danger of disappearing completely within the next years. This negative trend is progressing with increasing population density. In addition, the relevance of the former cultural rites is increasingly being forgotten, and with it the relevance of these places. In order to preserve these forest remnants in the long term, it is important to make the population aware of their numerous and valuable ecosystem services, as well as to bring the former cultural life back into the centre of society. A general prerequisite for efficiently conserving the Kayas might be improved communication among generations, such as between the elders of the Kayas and the youth, as well as among elders from different Kayas, to harmonize conservation strategies and the sustainable use of these forest remnants. In addition, strengthening communication between state institutions and the elders of the individual Kayas might help to find a common strategy to conserve the Kaya forests.
Pre-diagnostic deficits in social motivation are hypothesized to contribute to autism spectrum disorder (ASD), a heritable neurodevelopmental condition. We evaluated psychometric properties of a social motivation index (SMI) using parent-report item-level data from 597 participants in a prospective cohort of infant siblings at high and low familial risk for ASD. We tested whether lower SMI scores at 6, 12, and 24 months were associated with a 24-month ASD diagnosis and whether social motivation’s course differed relative to familial ASD liability. The SMI displayed good internal consistency and temporal stability. Children diagnosed with ASD displayed lower mean SMI T-scores at all ages and a decrease in mean T-scores across age. Lower group-level 6-month scores corresponded with higher familial ASD liability. Among high-risk infants, strong decline in SMI T-scores was associated with 10-fold odds of diagnosis. Infant social motivation is quantifiable by parental report, differentiates children with versus without later ASD by age 6 months, and tracks with familial ASD liability, consistent with a diagnostic and susceptibility marker of ASD. Early decrements and decline in social motivation indicate increased likelihood of ASD, highlighting social motivation’s importance to risk assessment and clarification of the ontogeny of ASD.
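The "good internal consistency" reported for the SMI is conventionally quantified with Cronbach's alpha. The sketch below computes alpha on a hypothetical item-by-respondent matrix; the actual SMI items and data are not reproduced here:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency.

    `items` is a list of per-item score lists, one inner list per
    item, aligned across respondents. Illustrative only: the SMI's
    reported reliability was computed on the real parent-report data.
    """
    k = len(items)          # number of items
    n = len(items[0])       # number of respondents

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = sum(var(it) for it in items)
    # Total score per respondent across all items
    totals = [sum(it[i] for it in items) for i in range(n)]
    return (k / (k - 1)) * (1 - item_vars / var(totals))

# Hypothetical ratings: 3 items, 5 respondents
items = [
    [4, 3, 5, 2, 4],
    [4, 2, 5, 3, 4],
    [5, 3, 4, 2, 4],
]
alpha = cronbach_alpha(items)
# Highly correlated toy items yield alpha close to 0.9
```

Values above roughly 0.7-0.8 are usually described as "good" internal consistency, the wording the abstract uses.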
Based on high rates of non-conversion to psychosis, especially in children and adolescents, it has been suggested that clinical high-risk (CHR) criteria are (1) pluripotential, (2) a transdiagnostic risk factor, or (3) simply a severity marker of mental disorders, rather than specifically psychosis-predictive. If any of these three alternative explanatory models were true, the prevalence of CHR criteria should differ between persons with and without mental disorders, and their severity should be associated with functional impairment as a measure of illness severity.
Objectives
To compare the prevalence and severity of CHR criteria/symptoms in children and adolescents from the community and in inpatients.
Methods
We compared CHR criteria/symptoms in 8- to 17-year-olds from the community and in inpatients not clinically suspected to develop psychosis.
Results
The 7.3% prevalence rate of CHR criteria in community subjects did not differ significantly from the 9.5% rate in inpatients. The frequency/severity of CHR criteria never differed between the community group and the four inpatient groups, while the frequency and severity of CHR symptoms differed only minimally. Group differences were found in only four CHR symptoms: suspiciousness/persecutory ideas of the SIPS, and thought pressure, derealization, and visual perception disturbances of the SPI-CY. These were consistent with a transdiagnostic risk factor or dimension, i.e., they displayed higher frequency and severity in inpatients. Low functioning, however, was at most weakly related to the severity of CHR criteria/symptoms, with the highest, yet still weak, correlation yielded for suspiciousness/persecutory ideas.
Conclusions
The lack of systematic differences between inpatients and community subjects does not support suggestions that CHR criteria/symptoms are pluripotential or transdiagnostic syndromes, or merely markers of symptom severity.