We present the first results from a new backend on the Australian Square Kilometre Array Pathfinder, the Commensal Realtime ASKAP Fast Transient COherent (CRACO) upgrade. CRACO records millisecond time-resolution visibility data and searches for dispersed fast transient signals, including fast radio bursts (FRBs), pulsars, and ultra-long period objects (ULPOs). Because it retains the visibility data, CRACO can localise transient events to arcsecond precision upon detection. Here, we describe the CRACO system and report results from a sky survey carried out by CRACO at 110 ms resolution during its commissioning phase. During the survey, CRACO detected two FRBs (including one discovered solely with CRACO, FRB 20231027A), reported more precise localisations for four pulsars, discovered two new RRATs, and detected one known ULPO, GPM J1839$-$10, through its sub-pulse structure. We present a sensitivity calibration of CRACO, finding that it achieves the expected sensitivity of 11.6 Jy ms to bursts of 110 ms duration or less. CRACO is currently running at 13.8 ms time resolution and aims to reach 1.7 ms time resolution before the end of 2024. At its planned 1.7 ms resolution, CRACO has an expected sensitivity of 1.5 Jy ms to bursts of 1.7 ms duration or less and should detect $10\times$ more FRBs than the current CRAFT incoherent-sum system (i.e. 0.5$-$2 localised FRBs per day), enabling us to better constrain FRB models and to use FRBs as cosmological probes.
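For readers unfamiliar with dispersed-transient searches, the delay a search backend must correct for follows the standard cold-plasma dispersion relation. The sketch below is illustrative only; it is not the CRACO pipeline, and the dispersion measure and band edges are hypothetical values chosen for the example:

```python
# Illustrative sketch (not the CRACO pipeline): the frequency-dependent arrival
# delay of a dispersed radio burst, dt = k_DM * DM * (f_lo^-2 - f_hi^-2),
# with frequencies in GHz and DM in pc cm^-3.
K_DM_MS = 4.148808  # ms GHz^2 pc^-1 cm^3 (standard dispersion constant)

def dispersion_delay_ms(dm_pc_cm3, f_lo_ghz, f_hi_ghz):
    """Delay (ms) of the low-frequency band edge relative to the high edge."""
    return K_DM_MS * dm_pc_cm3 * (f_lo_ghz**-2 - f_hi_ghz**-2)

# Hypothetical example: a burst at DM = 500 pc cm^-3 across a 0.8-1.1 GHz band
# arrives ~1.5 s later at the bottom of the band than at the top.
delay = dispersion_delay_ms(500.0, 0.8, 1.1)
```

A search at coarse time resolution (e.g. the 110 ms commissioning mode) smears any burst shorter than one sample, which is why finer resolutions improve sensitivity to short-duration FRBs.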
Two studies were conducted in 2022 and 2023 near Rocky Mount and Clayton, NC, to determine the optimal granular ammonium sulfate (AMS) rate and application timing for pyroxasulfone-coated AMS. In the rate study, AMS rates included 161, 214, 267, 321, 374, 428, and 481 kg ha−1, equivalent to 34, 45, 56, 67, 79, 90, and 101 kg N ha−1, respectively. All rates were coated with pyroxasulfone at 118 g ai ha−1 and topdressed onto 5- to 7-leaf cotton. In the timing study, pyroxasulfone (118 g ai ha−1) was coated on AMS and topdressed at 321 kg ha−1 (67 kg N ha−1) onto 5- to 7-leaf, 9- to 11-leaf, and first bloom cotton. In both studies, weed control and cotton tolerance to pyroxasulfone-coated AMS were compared to pyroxasulfone applied POST and POST-directed. The check in both studies received non-herbicide-treated AMS (321 kg ha−1). Before treatment applications, all plots (including the check) were maintained weed-free with glyphosate and glufosinate. In both studies, pyroxasulfone applied POST was most injurious (8% to 16%), while pyroxasulfone-coated AMS resulted in ≤4% injury. Additionally, no differences in cotton lint yield were observed in either study. With the exception of the lowest rate of AMS (161 kg ha−1; 79%), all AMS rates coated with pyroxasulfone controlled Palmer amaranth ≥83%, comparably to pyroxasulfone applied POST (92%) and POST-directed (89%). In the timing study, the application method did not affect Palmer amaranth control; however, applications made at the mid- and late timings outperformed early applications. These results indicate that pyroxasulfone-coated AMS can control Palmer amaranth comparably to pyroxasulfone applied POST and POST-directed, with minimal risk of cotton injury. However, the application timing could warrant additional treatment to achieve adequate late-season weed control.
Transdisciplinary sustainability scientists work with many different actors in pursuit of change. In so doing, they make choices about why and how to engage with different perspectives in their research. Reflexivity – active individual and collective critical reflection – is considered an important capacity for researchers to address the resulting ethical and practical challenges. We developed a framework for reflexivity as a transformative capacity in sustainability science through a critical systems approach, which helps make explicit the decisions that influence which perspectives are included in or excluded from research. We suggest that transdisciplinary sustainability research can become more transformative by nurturing reflexivity.
Technical summary
Transdisciplinary sustainability science is increasingly applied to study transformative change. Yet, transdisciplinary research involves diverse actors who hold contrasting and sometimes conflicting perspectives and worldviews. Reflexivity is cited as a crucial capacity for navigating the resulting challenges, but notions of reflexivity are often focused on individual researcher reflections that lack explicit links to the collective transdisciplinary research process and predominant modes of inquiry in the field. This gap presents the risk that reflexivity remains on the periphery of sustainability science and becomes ‘unreflexive’, as crucial dimensions are left unacknowledged. Our objective was to establish a framework for reflexivity as a transformative capacity in sustainability science through a critical systems approach. We developed and refined the framework through a rapid scoping review of literature on transdisciplinarity, transformation, and reflexivity, and reflection on a scenario study in the Red River Basin (US, Canada). The framework characterizes reflexivity as the capacity to nurture a dynamic, embedded, and collective process of self-scrutiny and mutual learning in service of transformative change, which manifests through interacting boundary processes – boundary delineation, interaction, and transformation. The case study reflection suggests how embedding this framework in research can expose boundary processes that block transformation and nurture more reflexive and transformative research.
Social media summary
Transdisciplinary sustainability research may become more transformative by nurturing reflexivity as a dynamic, embedded, and collective learning process.
Education can be viewed as a control theory problem in which students seek ongoing exogenous input—either through traditional classroom teaching or other alternative training resources—to minimize the discrepancies between their actual and target (reference) performance levels. Using illustrative data from $n=784$ Dutch elementary school students as measured using the Math Garden, a web-based computer-adaptive practice and monitoring system, we simulate and evaluate the outcomes of using off-line and finite-memory linear quadratic controllers with constraints to forecast students’ optimal training durations. By integrating population standards with each student’s own latent change information, we demonstrate that adoption of the control theory-guided, person- and time-specific training dosages could yield increased training benefits at reduced costs compared to students’ actual observed training durations and a fixed-duration training scheme. The control theory approach also outperforms a linear scheme that provides training recommendations based on observed scores in the presence of noisy and missing data. Design-related issues, such as ways to determine the penalty cost of input administration and the size of the control horizon window, are addressed through a series of illustrative and empirically (Math Garden) motivated simulations.
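The finite-horizon linear quadratic controller mentioned above can be sketched as a backward Riccati recursion that returns time-varying state-feedback gains. This is a minimal unconstrained sketch; the scalar "student" dynamics and cost weights below are illustrative placeholders, not quantities estimated from the Math Garden data, and the paper's constraints and finite-memory variants are omitted:

```python
import numpy as np

def finite_horizon_lqr(A, B, Q, R, horizon):
    """Backward Riccati recursion for the finite-horizon LQR problem.

    Returns gains [K_0, ..., K_{horizon-1}] such that the control law
    u_t = -K_t @ x_t minimizes sum_t (x_t' Q x_t + u_t' R u_t)."""
    P = Q.copy()
    gains = []
    for _ in range(horizon):
        # K = (R + B'PB)^{-1} B'PA ; P <- Q + A'P(A - BK)
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]  # reverse: recursion runs backward in time

# Toy scalar model: x is the gap between actual and target performance,
# which decays slowly on its own; training input u closes it faster.
A = np.array([[0.95]])   # gap persistence per period (hypothetical)
B = np.array([[0.1]])    # effect of one unit of training (hypothetical)
Q = np.array([[1.0]])    # penalty on remaining performance gap
R = np.array([[0.5]])    # penalty cost of administering training input
K = finite_horizon_lqr(A, B, Q, R, horizon=20)
```

Early-horizon gains are larger than the final-period gain, reflecting that training effort pays off over the remaining periods; the penalty weight `R` plays the role of the "penalty cost of input administration" discussed in the abstract.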
Multi-site and multi-organizational teams are increasingly common in epidemiologic research; however, there is a lack of standards or best practices for achieving success in collaborative research networks in epidemiology. We summarize our experiences and lessons learned from the Diabetes Location, Environmental Attributes, and Disparities (LEAD) Network, a collaborative agreement between the Centers for Disease Control and Prevention and research teams at Drexel University, New York University, Johns Hopkins University and Geisinger, and the University of Alabama at Birmingham. We present a roadmap for success in collaborative epidemiologic research, with recommendations focused on the following areas to maximize efficiency and success in collaborative research agreements: 1) operational and administrative considerations; 2) data access and sharing of sensitive data; 3) aligning network research aims; 4) harmonization of methods and measures; and 5) dissemination of findings. Future collaborations can be informed by our experiences and ultimately dedicate more resources to achieving scientific aims and efficiently disseminating scientific work products.
An experiment was conducted in 2022 and 2023 near Rocky Mount and Clayton, NC, to evaluate residual herbicide-coated fertilizer for cotton tolerance and Palmer amaranth control. Treatments included acetochlor, atrazine, dimethenamid-P, diuron, flumioxazin, fluometuron, fluridone, fomesafen, linuron, metribuzin, pendimethalin, pyroxasulfone, pyroxasulfone + carfentrazone, S-metolachlor, and sulfentrazone. Each herbicide was individually coated on granular ammonium sulfate (AMS) and top-dressed at 321 kg ha−1 (67 kg N ha−1) onto 5- to 7-leaf cotton. The check plots received the equivalent rate of nonherbicide-treated AMS. Before top-dress, all plots (including the check) were treated with glyphosate and glufosinate to control previously emerged weeds. All herbicides except metribuzin resulted in only transient cotton injury. Cotton response to metribuzin varied by year and location. In 2022, metribuzin caused 11% to 39% and 8% to 17% injury at the Clayton and Rocky Mount locations, respectively. In 2023, metribuzin caused 13% to 32% injury at Clayton and 73% to 84% injury at Rocky Mount. Pyroxasulfone (91%), pyroxasulfone + carfentrazone (89%), fomesafen (87%), fluridone (86%), flumioxazin (86%), and atrazine (85%) controlled Palmer amaranth ≥85%. Pendimethalin and fluometuron were the least effective treatments, resulting in 58% and 62% control, respectively. As anticipated, early season metribuzin injury translated into yield loss; plots treated with metribuzin yielded 640 kg ha−1, comparable only to plots treated with linuron (790 kg ha−1). These findings suggest that, with the exception of metribuzin, residual herbicides coated onto AMS may be suitable and effective in cotton production, providing growers with additional modes of action for late-season control of multiple herbicide-resistant Palmer amaranth.
Major depressive disorder (MDD) is the leading cause of disability globally, with moderate heritability and well-established socio-environmental risk factors. Genetic studies have been mostly restricted to European settings, with polygenic scores (PGS) demonstrating low portability across diverse global populations.
Methods
This study examines genetic architecture, polygenic prediction, and socio-environmental correlates of MDD in a family-based sample of 10 032 individuals from Nepal with array genotyping data. We used genome-based restricted maximum likelihood to estimate heritability, applied S-LDXR to estimate the cross-ancestry genetic correlation between Nepalese and European samples, and modeled PGS trained on a GWAS meta-analysis of European and East Asian ancestry samples.
Results
We estimated the narrow-sense heritability of lifetime MDD in Nepal to be 0.26 (95% CI 0.18–0.34, p = 8.5 × 10−6). Our analysis was underpowered to estimate the cross-ancestry genetic correlation (rg = 0.26, 95% CI −0.29 to 0.81). MDD risk was associated with higher age (beta = 0.071, 95% CI 0.06–0.08), female sex (beta = 0.160, 95% CI 0.15–0.17), and childhood exposure to potentially traumatic events (beta = 0.050, 95% CI 0.03–0.07), while neither the depression PGS (beta = 0.004, 95% CI −0.004 to 0.01) nor its interaction with childhood trauma (beta = 0.007, 95% CI −0.01 to 0.03) was strongly associated with MDD.
Conclusions
Estimates of lifetime MDD heritability in this Nepalese sample were similar to previous European ancestry samples, but PGS trained on European data did not predict MDD in this sample. This may be due to differences in ancestry-linked causal variants, differences in depression phenotyping between the training and target data, or setting-specific environmental factors that modulate genetic effects. Additional research among under-represented global populations will ensure equitable translation of genomic findings.
Understanding characteristics of healthcare personnel (HCP) with SARS-CoV-2 infection supports the development and prioritization of interventions to protect this important workforce. We report detailed characteristics of HCP who tested positive for SARS-CoV-2 from April 20, 2020 through December 31, 2021.
Methods:
CDC collaborated with Emerging Infections Program sites in 10 states to interview HCP with SARS-CoV-2 infection (case-HCP) about their demographics, underlying medical conditions, healthcare roles, exposures, personal protective equipment (PPE) use, and COVID-19 vaccination status. We grouped case-HCP by healthcare role. To describe residential social vulnerability, we merged geocoded HCP residential addresses with CDC/ATSDR Social Vulnerability Index (SVI) values at the census tract level. We defined highest and lowest SVI quartiles as high and low social vulnerability, respectively.
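The quartile-based social vulnerability grouping described above can be sketched as a simple threshold rule. The CDC/ATSDR SVI is a percentile rank in [0, 1]; the "middle" label below is our shorthand for the two interior quartiles, which the abstract does not name:

```python
def svi_category(svi_percentile):
    """Classify a census tract's CDC/ATSDR SVI percentile rank.

    Mirrors the abstract's definitions: highest quartile = high social
    vulnerability, lowest quartile = low social vulnerability."""
    if not 0.0 <= svi_percentile <= 1.0:
        raise ValueError("SVI percentile rank must be in [0, 1]")
    if svi_percentile >= 0.75:
        return "high"
    if svi_percentile <= 0.25:
        return "low"
    return "middle"  # interior quartiles; label is our assumption
```

In the study's workflow, each case-HCP's geocoded residential address would first be joined to its census tract's SVI value before applying a rule like this.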
Results:
Our analysis included 7,531 case-HCP. Most case-HCP with roles as certified nursing assistant (CNA) (444, 61.3%), medical assistant (252, 65.3%), or home healthcare worker (HHW) (225, 59.5%) reported their race and ethnicity as either non-Hispanic Black or Hispanic. More than one third of HHWs (166, 45.2%), CNAs (283, 41.7%), and medical assistants (138, 37.9%) reported a residential address in the high social vulnerability category. The proportion of case-HCP who reported using recommended PPE at all times when caring for patients with COVID-19 was lowest among HHWs compared with other roles.
Conclusions:
To mitigate SARS-CoV-2 infection risk in healthcare settings, infection prevention and control interventions should be specific to HCP roles and educational backgrounds. Additional interventions are needed to address high social vulnerability among HHWs, CNAs, and medical assistants.
OBJECTIVES/GOALS: We are investigating the role of IL-6 in regulating renal function by measuring mean arterial pressure (MAP), renal plasma flow (RPF), and glomerular filtration rate (GFR) in wild-type (WT) and IL-6-knockout (KO) mice in established mouse models of angiotensin II (AII)-dependent hypertension and salt-sensitive hypertension. METHODS/STUDY POPULATION: Twelve-week-old male WT and KO mice on the C57BL6 background strain were infused with vehicle (V; saline) or angiotensin II (AII; 200 ng/kg/min) for 12-14 days. Half of the AII-treatment groups were maintained on a high salt (HS; 6% NaCl) diet for the duration of the experiment, while the other half of the AII treatment groups and both vehicle groups were fed normal rat chow. MAP was continuously measured by a fluid-filled catheter in conscious mice for the duration of the experiment. RPF and GFR were measured on days 12-14 in anesthetized mice by the para-aminohippurate and fluorescein isothiocyanate-inulin techniques, respectively. All data were analyzed by 2-way ANOVA; *p<0.05 vs. WT, same treatment; #p<0.05 vs. V, same genotype; ^p<0.05, AII vs. AII+HS, same genotype. RESULTS/ANTICIPATED RESULTS: MAP was 31% lower in KO vs WT mice. AII increased MAP (1.2-fold) in WT but not KO mice. HS diet magnified AII-induced increases in MAP in WT and moderately increased MAP in AII-KO mice: [MAP (mmHg): WT+V, 130±7.0; KO+V, 91.0±4.0*; WT+AII, 153±5.0#; KO+AII, 83.0±4.0*; WT+AII+HS, 150±11#; KO+AII+HS, 93.0±4.0#]. AII infusion reduced RPF in the KO but not WT mice. Addition of HS reduced RPF in WT and exacerbated AII-induced reductions in RPF in KO mice [RPF (ml/min/g): WT+V, 1.82±0.23; KO+V, 1.91±0.40; WT+AII, 3.16±0.75#; KO+AII, 1.65±0.42*; WT+AII+HS, 1.10±0.31#^; KO+AII+HS, 1.13±0.XX#^]. The HS diet reduced GFR in AII-infused KO but not WT mice [GFR (µl/min/g): WT+V, 756±XX; KO+V, 788±XX; WT+AII, 1010±63*#; KO+AII, 756±23*; WT+AII+HS, 1100±150#; KO+AII+HS: 540±210*#^].
DISCUSSION/SIGNIFICANCE: The absence of IL-6 in male mice attenuated AII- and/or AII+HS-induced increases in MAP; however, it exacerbated HS-induced reductions in RPF and GFR. These findings suggest inhibiting IL-6 has therapeutic potential as an antihypertensive but not as a renal protective agent in hypertension and salt-sensitive hypertension disease states.
Individuals often use self-directed strategies to manage intake of tempting foods, but what these strategies are and whether they are effective is not well understood. This study assessed the frequency of use and subjective effectiveness of self-directed strategies in relation to BMI and snack intake.
Design:
A cross-sectional and prospective study with three time points (T1: baseline, T2: 3 months and T3: 3 years). At T1, demographics, frequency of use and subjective effectiveness of forty-one identified strategies were assessed. At T2 and T3, current weight was reported, and at T2 frequency of snack intake was also recorded.
Setting:
Online study in the UK.
Participants:
Data from 368 participants (mean age = 34·41 years; mean BMI = 25·06 kg/m²) were used for analysis at T1, n = 170 (46·20 % of the total sample) at T2 and n = 51 (13·59 %) at T3.
Results:
Two strategy factors were identified via principal axis factoring: (1) diet, exercise, reduction of temptations, and cognitive strategies; and (2) planning, preparation, and eating style. For factor 1, frequency of use, but not subjective effectiveness, was positively related to BMI at T1. Subjective effectiveness predicted an increase in BMI from T1 and T2 to T3. No relationship to snack intake was found. For factor 2, frequency of use was negatively related to BMI at T1. Neither frequency of use nor subjective effectiveness was related to changes in BMI over time, but subjective effectiveness was negatively correlated with unhealthy snack intake.
Conclusion:
Self-directed strategies to reduce the intake of tempting foods are not consistently related to BMI or snack intake.
Nursing home residents may be particularly vulnerable to coronavirus disease 2019 (COVID-19). A key question, then, is when and how often nursing homes should test staff for COVID-19, and how the answer may change as severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) evolves.
Design:
We developed an agent-based model representing a typical nursing home, COVID-19 spread, and its health and economic outcomes to determine the clinical and economic value of various screening and isolation strategies and how it may change under various circumstances.
Results:
Under winter 2023–2024 SARS-CoV-2 omicron variant conditions, symptom-based antigen testing averted 4.5 COVID-19 cases compared to no testing, saving $191 in direct medical costs. Testing implementation costs far outweighed these savings, resulting in net costs of $990 from the Centers for Medicare & Medicaid Services perspective, $1,545 from the third-party payer perspective, and $57,155 from the societal perspective. Testing did not return sufficient positive health effects to be cost-effective at a $50,000 per quality-adjusted life-year (QALY) threshold, exceeding this threshold in ≥59% of simulation trials. Testing remained cost-ineffective when routinely testing staff and varying face mask compliance, vaccine efficacy, and booster coverage. However, all antigen testing strategies became cost-effective (≤$31,906 per QALY) or cost saving (saving ≤$18,372) when the severe outcome risk was ≥3 times higher than that of current omicron variants.
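Cost-effectiveness judgments like those above compare an incremental cost-effectiveness ratio (ICER) against a willingness-to-pay threshold. A minimal sketch with hypothetical numbers, not outputs of the study's agent-based model:

```python
WTP_THRESHOLD = 50_000  # $ per QALY, the threshold used in the abstract

def icer(incremental_cost, incremental_qalys):
    """Incremental cost-effectiveness ratio: extra dollars spent per
    extra QALY gained by the intervention over the comparator."""
    return incremental_cost / incremental_qalys

# Hypothetical: testing costs $990 more than no testing and gains 0.01 QALYs.
ratio = icer(990.0, 0.01)
cost_effective = ratio <= WTP_THRESHOLD
```

An intervention is deemed cost-effective when its ICER falls at or below the threshold; in the hypothetical above it does not, which mirrors the abstract's finding for testing under mild-variant conditions.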
Conclusions:
SARS-CoV-2 testing costs outweighed benefits under winter 2023–2024 conditions; however, testing became cost-effective with increasingly severe clinical outcomes. Cost-effectiveness can change as the epidemic evolves because it depends on clinical severity and other intervention use. Thus, nursing home administrators and policy makers should monitor and evaluate viral virulence and other interventions over time.
Blood-based biomarkers represent a scalable and accessible approach for the detection and monitoring of Alzheimer’s disease (AD). Plasma phosphorylated tau (p-tau) and neurofilament light (NfL) are validated biomarkers for the detection of tau and neurodegenerative brain changes in AD, respectively. There is now an emphasis on expanding beyond these markers to detect and provide insight into the pathophysiological processes of AD. To this end, a reactive astrocytic marker, plasma glial fibrillary acidic protein (GFAP), has been of interest. Yet, little is known about the relationship between plasma GFAP and AD. Here, we examined the association between plasma GFAP, diagnostic status, and neuropsychological test performance. Diagnostic accuracy of plasma GFAP was compared with plasma measures of p-tau181 and NfL.
Participants and Methods:
This sample included 567 participants from the Boston University (BU) Alzheimer’s Disease Research Center (ADRC) Longitudinal Clinical Core Registry, including individuals with normal cognition (n=234), mild cognitive impairment (MCI) (n=180), and AD dementia (n=153). The sample included all participants who had a blood draw. Participants completed a comprehensive neuropsychological battery (sample sizes across tests varied due to missingness). Diagnoses were adjudicated during multidisciplinary diagnostic consensus conferences. Plasma samples were analyzed using the Simoa platform. Binary logistic regression analyses tested the association between GFAP levels and diagnostic status (i.e., cognitively impaired due to AD versus unimpaired), controlling for age, sex, race, education, and APOE e4 status. Area under the curve (AUC) statistics from receiver operating characteristics (ROC) using predicted probabilities from binary logistic regression examined the ability of plasma GFAP to discriminate diagnostic groups compared with plasma p-tau181 and NfL. Linear regression models tested the association between plasma GFAP and neuropsychological test performance, accounting for the above covariates.
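The AUC statistics described above can be computed directly from a model's predicted probabilities. A minimal sketch using the Mann–Whitney interpretation of AUC (the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case); the scores and labels below are hypothetical, not the study's data:

```python
import numpy as np

def auc_from_scores(scores, labels):
    """AUC via the Mann-Whitney rank statistic, equivalent to the area
    under the ROC curve. Ties count as half a win."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum() \
        + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

# Hypothetical predicted probabilities: impaired (1) vs. unimpaired (0)
labels = [0, 0, 0, 1, 1, 1]
scores = [0.2, 0.4, 0.6, 0.3, 0.7, 0.9]
auc = auc_from_scores(scores, labels)
```

An AUC of 0.5 indicates chance-level discrimination and 1.0 perfect discrimination, which frames the abstract's reported values of 0.74 to 0.76 as modest but informative.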
Results:
The mean (SD) age of the sample was 74.34 (7.54) years, 319 (56.3%) were female, 75 (13.2%) were Black, and 223 (39.3%) were APOE e4 carriers. Higher GFAP concentrations were associated with increased odds of cognitive impairment (GFAP z-score transformed: OR=2.233, 95% CI [1.609, 3.099], p<0.001; non-z-transformed: OR=1.004, 95% CI [1.002, 1.006], p<0.001). ROC analyses, including GFAP and the above covariates, showed plasma GFAP discriminated the cognitively impaired from unimpaired (AUC=0.75) and was similar, but slightly superior, to plasma p-tau181 (AUC=0.74) and plasma NfL (AUC=0.74). A joint panel of the plasma markers had the greatest discrimination accuracy (AUC=0.76). Linear regression analyses showed that higher GFAP levels were associated with worse performance on neuropsychological tests assessing global cognition, attention, executive functioning, episodic memory, and language abilities (ps<0.001) as well as higher CDR Sum of Boxes (p<0.001).
Conclusions:
Higher plasma GFAP levels differentiated participants with cognitive impairment from those with normal cognition and were associated with worse performance on all neuropsychological tests assessed. GFAP had similar accuracy in detecting those with cognitive impairment compared with p-tau181 and NfL; however, a panel of all three biomarkers was optimal. These results support the utility of plasma GFAP in AD detection and suggest the pathological processes it represents might play an integral role in the pathogenesis of AD.
Blood-based biomarkers offer a more feasible alternative to Alzheimer’s disease (AD) detection, management, and study of disease mechanisms than current in vivo measures. Given their novelty, these plasma biomarkers must be assessed against postmortem neuropathological outcomes for validation. Research has shown utility in plasma markers of the proposed AT(N) framework; however, recent studies have stressed the importance of expanding this framework to include other pathways. There are promising data supporting the usefulness of plasma glial fibrillary acidic protein (GFAP) in AD, but GFAP-to-autopsy studies are limited. Here, we tested the association between plasma GFAP and AD-related neuropathological outcomes in participants from the Boston University (BU) Alzheimer’s Disease Research Center (ADRC).
Participants and Methods:
This sample included 45 participants from the BU ADRC who had a plasma sample within 5 years of death and donated their brain for neuropathological examination. The most recent plasma samples were analyzed using the Simoa platform. Neuropathological examinations followed the National Alzheimer’s Coordinating Center procedures and diagnostic criteria. The NIA-Reagan Institute criteria were used for the neuropathological diagnosis of AD. Measures of GFAP were log-transformed. Binary logistic regression analyses tested the association of GFAP with autopsy-confirmed AD status and with semi-quantitative ratings of regional atrophy (none/mild versus moderate/severe). Ordinal logistic regression analyses tested the association between plasma GFAP and Braak stage and CERAD neuritic plaque score. Area under the curve (AUC) statistics from receiver operating characteristics (ROC) using predicted probabilities from binary logistic regression examined the ability of plasma GFAP to discriminate autopsy-confirmed AD status. All analyses controlled for sex, age at death, years between last blood draw and death, and APOE e4 status.
Results:
Of the 45 brain donors, 29 (64.4%) had autopsy-confirmed AD. The mean (SD) age of the sample at the time of blood draw was 80.76 (8.58) years, and there were 2.80 (1.16) years between the last blood draw and death. The sample included 20 (44.4%) females, 41 (91.1%) White participants, and 20 (44.4%) APOE e4 carriers. Higher GFAP concentrations were associated with increased odds of autopsy-confirmed AD (OR=14.12, 95% CI [2.00, 99.88], p=0.008). ROC analysis showed plasma GFAP accurately discriminated those with and without autopsy-confirmed AD on its own (AUC=0.75), and discrimination strengthened as the above covariates were added to the model (AUC=0.81). Increases in GFAP levels corresponded to increases in Braak stage (OR=2.39, 95% CI [0.71, 4.07], p=0.005), but not CERAD ratings (OR=1.24, 95% CI [0.004, 2.49], p=0.051). Higher GFAP levels were associated with greater temporal lobe atrophy (OR=10.27, 95% CI [1.53, 69.15], p=0.017), but this was not observed for any other region.
Conclusions:
The current results show that antemortem plasma GFAP is associated with non-specific AD neuropathological changes at autopsy. Plasma GFAP could be a useful and practical biomarker for assisting in the detection of AD-related changes, as well as for the study of disease mechanisms.
Social functioning patterns vary across measures in children with neurofibromatosis type 1 (NF1; Glad et al., 2021) with broad psychosocial screening measures having shown no impairment (Klein-Tasman et al., 2014; Martin et al., 2012; Sangster et al., 2011) while a more specific social functioning measure indicated poorer social skills (Barton & North, 2004; Huijbregts & de Sonneville, 2011; Loitfelder et al., 2015). The current aims were to characterize caregiver-reported social skills using three different measures and determine which measure appears to best capture social difficulties for young children with NF1.
Participants and Methods:
Fifty children with NF1 (31 males; mean age=3.96, SD=1.05) and 20 unaffected siblings (11 males; mean age=4.34, SD=0.88) in early childhood (ages 3-6) were rated by a caregiver on one social functioning measure (the Social Skills scale of the Social Skills Rating System (SSRS)) and two broader functioning measures that include assessment of social functioning (the Social Skills scale of the Behavior Assessment System for Children-Second Edition (BASC-2) and the Social Interaction and Communication domain of the Scales of Independent Behavior-Revised (SIB-R)).
Results:
For children with NF1, the SSRS mean standard score was significantly lower than those of the BASC-2 and SIB-R (f=-5.11, p<.001; f=-4.63, p<.001), while there was no significant difference between the BASC-2 and SIB-R. No significant differences emerged between measures for unaffected siblings. No significant group differences in mean standard score were found for the SSRS, BASC-2, or SIB-R. Fisher’s exact tests revealed the NF1 group had significantly more frequent difficulties than unaffected siblings on the BASC-2 (p=.017) but not on the SSRS or SIB-R. For both groups, Cochran’s Q tests determined a significant difference in the proportion of identified social difficulties across measures (NF1: χ²(2)=16.33, p<.001; Siblings: χ²(2)=9.25, p=.01). Follow-up McNemar’s tests demonstrated significantly more difficulties reported on the SSRS compared to the BASC-2 for both groups (NF1: p<.001; Siblings: p=.016). Significantly more frequent difficulties were also reported on the SSRS compared to the SIB-R for the NF1 group (p=.002) but not for the unaffected siblings group. No difference in the frequency of difficulties was evident between the BASC-2 and SIB-R for either group.
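The McNemar comparisons above test whether two measures disagree symmetrically in the cases they flag, using only the discordant pairs. A minimal exact (binomial) version is sketched below with hypothetical counts; the abstract does not report which variant of the test the authors used:

```python
from math import comb

def mcnemar_exact_p(b, c):
    """Exact two-sided McNemar test on the discordant pairs.

    b = cases flagged only by measure 1, c = cases flagged only by
    measure 2. Under H0 the discordant pairs split 50/50, so the tail
    probability is binomial with p = 0.5."""
    n = b + c
    tail = sum(comb(n, i) for i in range(min(b, c) + 1)) / 2**n
    return min(1.0, 2 * tail)  # two-sided (may slightly exceed 1 when b == c)

# Hypothetical: 8 children flagged as having difficulties only by the SSRS,
# 1 flagged only by the BASC-2.
p_value = mcnemar_exact_p(8, 1)
```

A small p-value indicates the two measures identify difficulties at different rates, which is the pattern the abstract reports for the SSRS versus the broader measures.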
Conclusions:
Social skills difficulties appear to be best captured using the SSRS in young children, particularly children with NF1, as this measure resulted in the lowest mean score and the greatest frequency of difficulties observed within the NF1 group. However, it is notable that group differences relative to unaffected siblings were not observed in mean scores or frequency of difficulties; these young children with NF1 are not showing marked social challenges, and social difficulties, when present at this age, may be mild. Nevertheless, using a measure that specifically targets social functioning, rather than one where social functioning is merely a component of a broader measure, appears beneficial for capturing social difficulty. Using measures that best capture social difficulties will contribute to early identification and assessment of intervention effectiveness. Further work with additional age ranges and longitudinal trajectories is needed.
Increasing emphasis on the use of real-world evidence (RWE) to support clinical policy and regulatory decision-making has led to a proliferation of guidance, advice, and frameworks from regulatory agencies, academia, professional societies, and industry. A broad spectrum of studies use real-world data (RWD) to produce RWE, ranging from randomized trials with outcomes assessed using RWD to fully observational studies. Yet, many proposals for generating RWE lack sufficient detail, and many analyses of RWD suffer from implausible assumptions, other methodological flaws, or inappropriate interpretations. The Causal Roadmap is an explicit, itemized, iterative process that guides investigators to prespecify study design and analysis plans; it addresses a wide range of guidance within a single framework. By supporting the transparent evaluation of causal assumptions and facilitating objective comparisons of design and analysis choices based on prespecified criteria, the Roadmap can help investigators to evaluate the quality of evidence that a given study is likely to produce, specify a study to generate high-quality RWE, and communicate effectively with regulatory agencies and other stakeholders. This paper aims to disseminate and extend the Causal Roadmap framework for use by clinical and translational researchers; three companion papers demonstrate applications of the Causal Roadmap for specific use cases.
The Eighth World Congress of Pediatric Cardiology and Cardiac Surgery (WCPCCS) will be held in Washington DC, USA, from Saturday, 26 August, 2023 to Friday, 1 September, 2023, inclusive. The Eighth World Congress of Pediatric Cardiology and Cardiac Surgery will be the largest and most comprehensive scientific meeting dedicated to paediatric and congenital cardiac care ever held. At the time of writing, the Eighth World Congress of Pediatric Cardiology and Cardiac Surgery has 5,037 registered attendees (and rising) from 117 countries, a truly diverse and international faculty of over 925 individuals from 89 countries, over 2,000 individual abstracts and poster presenters from 101 countries, and a Best Abstract Competition featuring 153 oral abstracts from 34 countries. For information about the Eighth World Congress of Pediatric Cardiology and Cardiac Surgery, please visit the following website: www.WCPCCS2023.org. The purpose of this manuscript is to review the activities related to global health and advocacy that will occur at the Eighth World Congress of Pediatric Cardiology and Cardiac Surgery.
Acknowledging the need for urgent change, we wanted to take the opportunity to bring a common voice to the global community and issue the Washington DC WCPCCS Call to Action on Addressing the Global Burden of Pediatric and Congenital Heart Diseases. A copy of this Washington DC WCPCCS Call to Action is provided in the Appendix of this manuscript. This Washington DC WCPCCS Call to Action is an initiative aimed at increasing awareness of the global burden, promoting the development of sustainable care systems, and improving access to high-quality and equitable healthcare for children with heart disease, as well as adults with congenital heart disease, worldwide.
We investigate the diversity in the sizes and average surface densities of the neutral atomic hydrogen (H i) gas discs in $\sim$280 nearby galaxies detected by the Widefield ASKAP L-band Legacy All-sky Blind Survey (WALLABY). We combine the uniformly observed, interferometric H i data from pilot observations of the Hydra cluster and NGC 4636 group fields with photometry measured from ultraviolet, optical, and near-infrared imaging surveys to investigate the interplay between stellar structure, star formation, and H i structural parameters. We quantify the H i structure by the size of the H i disc relative to the optical disc and by the average H i surface density measured using effective and isodensity radii. For galaxies resolved by $>1.3$ beams, we find that galaxies with higher stellar masses and stellar surface densities tend to have less extended H i discs and lower H i surface densities: the isodensity H i structural parameters show a weak negative dependence on stellar mass and stellar mass surface density. These trends strengthen when we limit our sample to galaxies resolved by $>2$ beams. We find that galaxies with higher H i surface densities and more extended H i discs tend to be more star forming: the isodensity H i structural parameters have stronger correlations with star formation. Normalising the H i disc size by the optical effective radius (instead of the isophotal radius) produces positive correlations with stellar masses and stellar surface densities and removes the correlations with star formation. This is because the effective and isodensity H i radii increase with mass at similar rates, whereas in the optical the effective radius increases more slowly than the isophotal radius. Our results are in qualitative agreement with previous studies and demonstrate that with WALLABY we can begin to bridge the gap between small galaxy samples with high spatial resolution H i data and large, statistical studies using spatially unresolved, single-dish data.
The quenching of cluster satellite galaxies is inextricably linked to the suppression of their cold interstellar medium (ISM) by environmental mechanisms. While the removal of neutral atomic hydrogen (H i) at large radii is well studied, how the environment impacts the remaining gas in the centres of galaxies, which are dominated by molecular gas, is less clear. Using new observations from the Virgo Environment traced in CO survey (VERTICO) and archival H i data, we study the H i and molecular gas within the optical discs of Virgo cluster galaxies on 1.2-kpc scales with spatially resolved scaling relations between stellar ($\Sigma_{\star}$), H i ($\Sigma_{\text{H}\,{\small\text{I}}}$), and molecular gas ($\Sigma_{\text{mol}}$) surface densities. Adopting H i deficiency as a measure of environmental impact, we find evidence that, in addition to removing the H i at large radii, the cluster processes also lower the average $\Sigma_{\text{H}\,{\small\text{I}}}$ of the remaining gas even in the central $1.2\,$kpc. The impact on molecular gas is comparatively weaker than on the H i, and we show that the lower $\Sigma_{\text{mol}}$ gas is removed first. In the most H i-deficient galaxies, however, we find evidence that environmental processes reduce the typical $\Sigma_{\text{mol}}$ of the remaining gas by nearly a factor of 3. We find no evidence for environment-driven elevation of $\Sigma_{\text{H}\,{\small\text{I}}}$ or $\Sigma_{\text{mol}}$ in H i-deficient galaxies. Using the ratio of $\Sigma_{\text{mol}}$-to-$\Sigma_{\text{H}\,{\small\text{I}}}$ in individual regions, we show that changes in the ISM physical conditions, estimated using the total gas surface density and midplane hydrostatic pressure, cannot explain the observed reduction in molecular gas content. Instead, we suggest that direct stripping of the molecular gas is required to explain our results.