Despite increased awareness of and action towards Equality, Diversity and Inclusion (EDI), the glaciological community still experiences and perpetuates exclusionary and discriminatory behavior. Here we discuss the challenges and visions of a group predominantly composed of early-career researchers from the 2023 edition of the Karthaus Summer School on Ice Sheets and Glaciers in the Climate System. This paper presents the results of an EDI-focused workshop in which the 36 students and 12 lecturers attending the summer school actively participated. We identify common threads from participant responses and distill them into collective visions for the future of the glaciological research community, built on actionable steps toward change. We address the questions that guided the workshop: what do we see as current EDI challenges in the glaciology research community, and which improvements would we like to see in the next fifty years? Contributions have been sorted into three main challenges we want and need to face: making glaciology (1) more accessible, (2) more equitable and (3) more responsible.
Objectives/Goals: We describe the prevalence of individuals with household exposure to SARS-CoV-2 who subsequently report symptoms consistent with COVID-19 while having PCR results persistently negative for SARS-CoV-2 (S[+]/P[-]). We assess whether paired serology can assist in identifying the true infection status of such individuals. Methods/Study Population: In a multicenter household transmission study, index patients with SARS-CoV-2 were identified and enrolled together with their household contacts within 1 week of the index’s illness onset. For 10 consecutive days, enrolled individuals provided daily symptom diaries and nasal specimens for polymerase chain reaction (PCR) testing. Contacts were categorized into 4 groups based on presence of symptoms (S[+/-]) and PCR positivity (P[+/-]). Acute and convalescent blood specimens from these individuals (30 days apart) were subjected to quantitative serologic analysis for SARS-CoV-2 anti-nucleocapsid, spike, and receptor-binding domain antibodies. The antibody change in S[+]/P[-] individuals was assessed by thresholds derived from receiver operating characteristic (ROC) analysis of S[+]/P[+] (infected) versus S[-]/P[-] (uninfected) individuals. Results/Anticipated Results: Among 1,433 contacts, 67% had ≥1 SARS-CoV-2 PCR[+] result, while 33% remained PCR[-]. Among the latter, 55% (n = 263) reported symptoms for at least 1 day, most commonly congestion (63%), fatigue (63%), headache (62%), cough (59%), and sore throat (50%). A history of both previous infection and vaccination was present in 37% of S[+]/P[-] individuals, 38% of S[-]/P[-], and 21% of S[+]/P[+] (P<0.05). Vaccination alone was present in 37%, 41%, and 52%, respectively. ROC analyses of paired serologic testing of S[+]/P[+] (n = 354) vs. S[-]/P[-] (n = 103) individuals found that anti-nucleocapsid data had the highest area under the curve (0.87).
Based on the 30-day antibody change, 6.9% of S[+]/P[-] individuals demonstrated an increased convalescent antibody signal, although a similar seroresponse was observed in 7.8% of the S[-]/P[-] group. Discussion/Significance of Impact: Reporting respiratory symptoms was common among household contacts with persistent PCR[-] results. Paired serology analyses found similar seroresponses between S[+]/P[-] and S[-]/P[-] individuals. The symptomatic-but-PCR-negative phenomenon, while frequent, is therefore unlikely to be attributable to true SARS-CoV-2 infections missed by PCR.
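The threshold-derivation step described above (ROC analysis of infected S[+]/P[+] versus uninfected S[-]/P[-] contacts, with the resulting cutoff then applied to the ambiguous S[+]/P[-] group) can be sketched in Python. Everything here is illustrative: the antibody-change distributions are synthetic, and the use of Youden's J to pick the cutoff is an assumption rather than the study's stated criterion.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical 30-day anti-nucleocapsid antibody change (log scale):
# infected contacts tend to show a larger rise than uninfected ones.
infected = rng.normal(loc=1.5, scale=0.8, size=354)    # S[+]/P[+]
uninfected = rng.normal(loc=0.0, scale=0.8, size=103)  # S[-]/P[-]

y = np.concatenate([np.ones_like(infected), np.zeros_like(uninfected)])
x = np.concatenate([infected, uninfected])

auc = roc_auc_score(y, x)

# Youden's J picks the threshold maximising sensitivity + specificity - 1.
fpr, tpr, thresholds = roc_curve(y, x)
best = thresholds[np.argmax(tpr - fpr)]

# Apply the derived cutoff to a third, ambiguous group (e.g. S[+]/P[-]).
candidates = rng.normal(loc=0.1, scale=0.8, size=263)
seropositive_rate = float(np.mean(candidates >= best))
```

With well-separated groups the AUC is high and the cutoff lands between the two group means; in the real analysis the threshold would come from the measured antibody data rather than these simulated draws.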
Effective, evidence-based psychological therapies for prolonged grief reactions exist but are not routinely available in United Kingdom National Health Service (NHS) services. This audit evaluated the feasibility and clinical effectiveness of a high-intensity prolonged grief disorder therapy (PGDT) treatment pathway in an NHS Talking Therapies (NHS-TT) context for clients with a prolonged grief reaction alongside depression, anxiety and/or post-traumatic stress disorder. Seventeen experienced high-intensity therapists were trained to deliver PGDT. Ninety-one clients were treated between April 2022 and April 2024, 80 of whom met criteria and were included in this audit; 83% of clients completed at least four treatment sessions (a liberal estimate of minimum adequate dose), the mean number of sessions attended was 10.29 (SD=5.81) and rates of drop-out were low (16%). Data completeness rates were 100% for depression, anxiety and functioning measures and 61% for the grief outcome (Brief Grief Questionnaire; BGQ). There was no evidence of treatment-related harms. There were statistically significant, large pre–post treatment effect size improvements across outcomes from intake to last treatment session (p<.001; Cohen’s d>1.05). According to NHS-TT outcome metrics for combined changes in anxiety and depression, 82% of clients exhibited reliable improvement, 72% showed recovery, and 68% achieved reliable recovery. On the BGQ, rates of reliable improvement were 77% and rates of recovery were 63%. Effects held when focusing on the subgroup with more severe grief symptoms (intake BGQ≥8; n=40). These findings suggest it is feasible and probably effective to implement a PGDT pathway in an NHS-TT context.
Key learning aims
(1) To become familiar with prolonged grief disorder (PGD) as a diagnostic construct.
(2) To gain insight into using prolonged grief disorder therapy (PGDT) to treat PGD.
(3) To understand ways to train and supervise National Health Service Talking Therapies (NHS-TT) high-intensity therapists to implement PGDT.
(4) To evaluate the potential feasibility and effectiveness of implementing PGDT in an NHS-TT context.
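The "reliable improvement" and "reliable recovery" metrics used in NHS-TT reporting rest on a reliable change criterion. A minimal Jacobson–Truax-style sketch follows; the score values, baseline SD and reliability are hypothetical, and this is not the official NHS-TT computation.

```python
import numpy as np

# Jacobson-Truax reliable change index (RCI): a pre-post change is deemed
# "reliable" when it exceeds the measurement error of the difference score.
def rci(pre, post, sd_baseline, reliability):
    se_measure = sd_baseline * np.sqrt(1.0 - reliability)  # SEM of the scale
    se_diff = np.sqrt(2.0) * se_measure                    # SE of a difference
    return (post - pre) / se_diff

# Example: PHQ-9-style depression scores (all values illustrative).
change = rci(pre=18.0, post=8.0, sd_baseline=5.0, reliability=0.89)
reliable_improvement = change <= -1.96  # 95% criterion for reliable change
```

"Recovery" would additionally require the post-treatment score to cross a clinical cut-off; combining the two yields the "reliable recovery" rate reported above.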
Free sugar intakes currently exceed recommendations for health, yet effective strategies for reducing consumption remain to be established. This work investigated the effects of different dietary recommendations for reducing free sugar (FS) intakes on relevant outcomes in UK adults consuming > 5 % of total energy intake (TEI) from FS. Using a randomised controlled parallel-group design, 242 adults received nutrient-based (n 61), nutrient- and food-based (n 60), nutrient-, food- and food-substitution-based (n 63) or no (n 58) recommendations for reducing FS at a single timepoint, with effects assessed for the following 12 weeks. Primary outcomes were FS intakes as a percentage of TEI (%FS) and adherence to the recommendations at week 12. Secondary outcomes included TEI, diet composition, sugar-rich and low-calorie-sweetened food consumption and anthropometry. In intention-to-treat analyses adjusted for baseline measures, %FS reduced in intervention groups (%FSchange = –2·5 to −3·3 %) compared with control (%FSchange = –1·2 %) (smallest B = –0·573, P = 0·03), with effects from week 1 until week 12 and no differences between interventions (largest B = 0·352, P = 0·42). No effects of the interventions were found on dietary profiles, but change in %FS was associated with change in %TEI from non-sugar carbohydrate (B = 0·141, P < 0·01) and from protein (B = –0·171, P = 0·02). Body weight was also lower at week 12 in intervention groups compared with control (B = –0·377, P < 0·05), but associations with %FS were weak. Our findings demonstrate the benefit of dietary recommendations for reducing FS intakes in UK adults. Limited advantages were found for the different dietary recommendations, but variety may offer individual choice.
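The intention-to-treat analyses adjusted for baseline measures can be illustrated with a baseline-adjusted (ANCOVA-style) regression on synthetic data. The group sizes, effect magnitudes and noise levels below are invented for illustration and do not reproduce the trial's models.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical trial: free-sugar intake (%TEI) at baseline and week 12
# for a control arm and one intervention arm (all numbers illustrative).
n = 60
baseline_ctrl = rng.normal(12.0, 3.0, n)
baseline_int = rng.normal(12.0, 3.0, n)
week12_ctrl = baseline_ctrl - 1.2 + rng.normal(0.0, 1.5, n)
week12_int = baseline_int - 3.0 + rng.normal(0.0, 1.5, n)

# ANCOVA-style model: week-12 %FS ~ intercept + baseline + group dummy,
# mirroring intention-to-treat analyses adjusted for baseline measures.
y = np.concatenate([week12_ctrl, week12_int])
baseline = np.concatenate([baseline_ctrl, baseline_int])
group = np.concatenate([np.zeros(n), np.ones(n)])  # 1 = intervention
X = np.column_stack([np.ones(2 * n), baseline, group])

coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
intercept, b_baseline, b_group = coefs
# b_group estimates the baseline-adjusted group difference at week 12.
```

Adjusting for baseline in this way (rather than analysing raw change scores) is the standard choice for randomised parallel-group designs because it improves precision without biasing the group contrast.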
Dilated cardiomyopathy (DCM) is a leading cause of heart failure and the most common indication for heart transplantation. Guidelines are typically based on studies of adults and applied to the young. Children and adolescents diagnosed with DCM face lifestyle challenges different from those of individuals diagnosed in adulthood, including medical trauma, and these challenges are shaped by maturity levels and confidence in advocating to adults.
Using a UK patient-scientist’s perspective, we reviewed the age-specific challenges faced by the young with DCM, evaluated current guidelines and evidence, and identified areas requiring further recommendations and research. We highlight the importance of (i) the transition clinic from paediatric to adult services, (ii) repeated signposting to mental health services, (iii) standardised guidance on physical activity, (iv) caution surrounding alcohol and smoking, (v) the dangers of illegal drugs, and (vi) reproductive options and health.
Further research is needed to address the many uncertainties in these areas with respect to young age, particularly for physical activity, and such guidance would be welcomed by the young with DCM who must come to terms with being different and more limited amongst healthy peers.
Polygenic scores (PGSs) have garnered increasing attention in the clinical sciences due to their robust prediction signals for psychopathology, including externalizing (EXT) behaviors. However, studies leveraging PGSs have rarely accounted for the phenotypic and developmental heterogeneity in EXT outcomes. We used the National Longitudinal Study of Adolescent to Adult Health (analytic N = 4,416), spanning ages 13 to 41, to examine associations between EXT PGSs and trajectories of antisocial behaviors (ASB) and substance use behaviors (SUB) identified via growth mixture modeling. Four trajectories of ASB were identified: High Decline (3.6% of the sample), Moderate (18.9%), Adolescence-Peaked (10.6%), and Low (67%), while three were identified for SUB: High Use (35.2%), Typical Use (41.7%), and Low Use (23%). EXT PGSs were consistently associated with persistent trajectories of ASB and SUB (High Decline and High Use, respectively), relative to comparison groups. EXT PGSs were also associated with the Low Use trajectory of SUB, relative to the comparison group. Results suggest PGSs may be sensitive to developmental typologies of EXT, where PGSs are more strongly predictive of chronicity in addition to (or possibly rather than) absolute severity.
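As a rough illustration of the trajectory-classification idea, a finite mixture model can be fit to simulated longitudinal scores. Proper growth mixture modeling estimates class-specific growth curves with random effects; the Gaussian mixture below is only a crude stand-in, and the two simulated classes (a "low" and an "adolescence-peaked" pattern) are invented.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)

# Toy longitudinal data: antisocial-behavior scores at 5 ages spanning
# adolescence to adulthood, drawn from two invented latent classes.
ages = np.linspace(13, 41, 5)
low = rng.normal(1.0, 0.5, size=(300, 5))                 # "Low" class
peak_curve = np.exp(-0.5 * ((ages - 17) / 4) ** 2) * 6.0  # peaks near age 17
peaked = peak_curve + rng.normal(0.0, 0.5, size=(100, 5)) # "Adolescence-Peaked"
X = np.vstack([low, peaked])

# Fit a 2-component mixture as a crude stand-in for growth mixture modeling,
# then recover each person's most likely trajectory class.
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
labels = gmm.predict(X)
```

Once class labels (or posterior class probabilities) are in hand, the PGS association step reduces to regressing class membership on the polygenic score.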
North Carolina growers have long struggled to control Italian ryegrass, and recent research has confirmed that some Italian ryegrass biotypes have become resistant to nicosulfuron, glyphosate, clethodim, and paraquat. Integrating alternative management strategies is crucial to effectively control such biotypes. The objectives of this study were to evaluate Italian ryegrass control with cover crops and fall-applied residual herbicides and investigate cover crop injury from residual herbicides. This study was conducted during the fall/winter of 2021–22 in Salisbury, NC, and fall/winter of 2021–22 and 2022–23 in Clayton, NC. The study was designed as a 3 × 5 split-plot in which the main plot consisted of three cover crop treatments (no-cover, cereal rye at 80 kg ha−1, and crimson clover at 18 kg ha−1), and the subplots consisted of five residual herbicide treatments (S-metolachlor, flumioxazin, metribuzin, pyroxasulfone, and nontreated). In the 2021–22 season at Clayton, metribuzin injured cereal rye and crimson clover 65% and 55%, respectively. However, metribuzin injured both cover crops ≤6% in 2022–23. Flumioxazin resulted in unacceptable crimson clover injury of 50% and 38% in 2021–22 and 2022–23 in Clayton and 40% in Salisbury, respectively. Without preemergence herbicides, cereal rye controlled Italian ryegrass by 85% and 61% at 24 wk after planting in 2021–22 and 2022–23 in Clayton and 82% in Salisbury, respectively. In 2021–22, Italian ryegrass seed production was lowest in cereal rye plots at both locations, except when it was treated with metribuzin. For example, in Salisbury, cereal rye plus metribuzin resulted in 39,324 seeds m–2, compared to ≤4,386 seeds m–2 from all other cereal rye treatments. In 2022–23, Italian ryegrass seed production in cereal rye was lower when either metribuzin or pyroxasulfone were used preemergence (2,670 and 1,299 seeds m–2, respectively) compared with cereal rye that did not receive an herbicide treatment (5,600 seeds m–2). 
Nomenclature: cereal rye (Secale cereale L.) and crimson clover (Trifolium incarnatum L.)
Accurate diagnosis of bipolar disorder (BPD) is difficult in clinical practice, with an average delay between symptom onset and diagnosis of about 7 years. A depressive episode often precedes the first manic episode, making it difficult to distinguish BPD from unipolar major depressive disorder (MDD).
Aims
We use genome-wide association analyses (GWAS) to identify differential genetic factors and to develop predictors based on polygenic risk scores (PRS) that may aid early differential diagnosis.
Method
Based on individual genotypes from case–control cohorts of BPD and MDD shared through the Psychiatric Genomics Consortium, we compile case–case–control cohorts, applying a careful quality control procedure. In a resulting cohort of 51 149 individuals (15 532 BPD patients, 12 920 MDD patients and 22 697 controls), we perform a variety of GWAS and PRS analyses.
Results
Although our GWAS is not well powered to identify genome-wide significant loci, we find significant chip heritability and demonstrate the ability of the resulting PRS to distinguish BPD from MDD, including BPD cases with depressive onset (BPD-D). We replicate our PRS findings in an independent Danish cohort (iPSYCH 2015, N = 25 966). We observe strong genetic correlation between our case–case GWAS and that of case–control BPD.
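At its core, a polygenic risk score is a weighted sum of effect-allele dosages. A toy numpy sketch (with random dosages and hypothetical effect sizes, not the study's GWAS weights) is:

```python
import numpy as np

rng = np.random.default_rng(2)

# PRS_i = sum_j dosage_ij * beta_j, where dosages count effect alleles
# (0/1/2) and betas are per-SNP effect sizes from a discovery GWAS.
# Both matrices here are simulated placeholders.
n_people, n_snps = 1000, 500
dosages = rng.integers(0, 3, size=(n_people, n_snps)).astype(float)
betas = rng.normal(0.0, 0.05, size=n_snps)

prs = dosages @ betas

# Standardise within the sample, as PRS are typically reported in SD units
# before being entered into case-case or case-control comparisons.
prs_z = (prs - prs.mean()) / prs.std()
```

In practice the weights would come from an external GWAS (with clumping/thresholding or shrinkage applied), and the standardised score would then be tested for its ability to separate BPD from MDD cases.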
Conclusions
We find that MDD and BPD, including BPD-D, are genetically distinct. Our findings support the view that controls, MDD patients and BPD patients lie primarily on a continuum of genetic risk. Future studies with larger and richer samples will likely yield a better understanding of these findings and enable the development of better genetic predictors distinguishing BPD and, importantly, BPD-D from MDD.
In North America, less than 30% of children with complex CHD receive recommended follow-up for neurodevelopmental and psychosocial care. While rates of follow-up care at surgical centres have been described, little is known about similar services outside of surgical centres.
Methods:
This cohort study used Maine Health Data Organization’s All Payer Claims Data from 2015 to 2019 to identify developmental and psychosocial-related encounters received by children 0–18 years of age with complex CHD. Encounters were classified as developmental, psychological, and neuropsychological testing; mental health assessments and interventions; and health and behaviour assessments and interventions. We analysed the association between children’s demographic and clinical characteristics and the receipt of any encounter.
Results:
Of 799 unique children with complex CHD (57% male, 56% Medicaid, and 64% rural), 185 (23%) had at least one developmental or psychosocial encounter. Only 13 children (1.6%) received such care at a surgical centre. Developmental testing took place at a mix of community clinics/private practices (39%), state-based programmes (31%), and hospital-affiliated clinics (28%), with most encounters billing Medicaid (86%). Health and behavioural assessments occurred exclusively at hospital-affiliated clinics, predominantly with Medicaid claims (82%). Encounters for mental health interventions, however, occurred mostly in community clinics/private practices (80%), with the majority of encounters billing commercial insurance (64%).
Conclusion:
Children with complex CHD in Maine access developmental and psychosocial services in locations beyond surgical centres. To better support the neurodevelopmental outcomes of their patients, CHD centres should build partnerships with these external providers.
To introduce the Emory 10-element Complex Figure (CF) scoring system and recognition task. We evaluated the relationship between Emory CF scoring and traditional Osterrieth CF scoring approach in cognitively healthy volunteers. Additionally, a cohort of patients undergoing deep brain stimulation (DBS) evaluation was assessed to compare the scoring methods in a clinical population.
Method:
The study included 315 volunteers from the Emory Healthy Brain Study (EHBS) with Montreal Cognitive Assessment (MoCA) scores of 24/30 or higher. The clinical group consisted of 84 DBS candidates. Scoring time differences were analyzed in a subset of 48 DBS candidates.
Results:
High correlations between scoring methods were present for non-recognition components in both cohorts (EHBS: Copy r = 0.76, Immediate r = 0.86, Delayed Recall r = 0.85, Recognition r = 0.47; DBS: Copy r = 0.80, Immediate r = 0.84, Delayed Recall r = 0.85, Recognition r = 0.37). Emory CF scoring times were significantly shorter than Osterrieth times across non-recognition conditions (all p < 0.00001, individual Cohen’s d: 1.4–2.4), resulting in an average time savings of 57%. DBS patients scored lower than EHBS participants across CF memory measures, with larger effect sizes for Emory CF scoring (Cohen’s d range = 1.0–1.2). Emory CF scoring demonstrated better group classification in logistic regression models, improving DBS candidate classification from 16.7% to 32.1% compared to Osterrieth scoring.
Conclusions:
Emory CF scoring yields results that are highly correlated with traditional Osterrieth scoring, significantly reduces scoring time burden, and demonstrates greater sensitivity to memory decline in DBS candidates. Its efficiency and sensitivity make Emory CF scoring well-suited for broader implementation in clinical research.
There are numerous challenges pertaining to epilepsy care across Ontario, including Epilepsy Monitoring Unit (EMU) bed pressures, surgical access and community supports. We surveyed the clinical, community and operational state of Ontario epilepsy centres and community epilepsy agencies following the COVID-19 pandemic. A 44-item survey was distributed to all 11 district and regional adult and paediatric Ontario epilepsy centres. Qualitative responses were collected from community epilepsy agencies. Results revealed ongoing gaps in epilepsy care across Ontario, with EMU bed pressures and labour shortages being limiting factors. A clinical network advising the Ontario Ministry of Health will improve access to epilepsy care.
Scholars have long recognized that interpersonal networks play a role in mobilizing social movements. Yet, many questions remain. This Element addresses these questions by theorizing about three dimensions of ties: emotionally strong or weak, movement insider or outsider, and ingroup or cross-cleavage. The survey data on the 2020 Black Lives Matter protests show that weak and cross-cleavage ties among outsiders enabled the movement to evolve from a small provocation into a massive national mobilization. In particular, the authors find that Black people mobilized one another through social media and spurred their non-Black friends to protest by sharing their personal encounters with racism. These results depart from the established literature regarding the civil rights movement that emphasizes strong, movement-internal, and racially homogenous ties. The networks that mobilize appear to have changed in the social media era. This title is also available as Open Access on Cambridge Core.
An accurate accounting of prior sport-related concussion (SRC) is critical to optimizing the clinical care of athletes with SRC. Yet, obtaining such a history via medical records or lifetime monitoring is often not feasible, necessitating the use of self-report histories. The primary objective of the current project is to determine the degree to which athletes consistently report their SRC history on serial assessments throughout their collegiate athletic career.
Participants and Methods:
Data were obtained from the NCAA-DoD CARE Consortium and included 1621 athletes (914 male) from a single Division 1 university who participated in athletics during the 2014-2017 academic years. From this initial cohort, 752 athletes completed a second-year assessment and 332 completed a third-year assessment. Yearly assessments included a brief self-report survey that queried SRC history of the previous year. Consistency of self-reported SRC history was defined as reporting the same number of SRC on subsequent yearly evaluation as had been reported the previous year.
For every year of participation, the number of SRC reported on the baseline exam (Reported) and the number of SRC recorded by athletes and medical staff during the ensuing season (Recorded) were tabulated. In a subsequent year, the expected number of SRC (Expected) was computed as the sum of Reported and Recorded. For participation years in which Expected could be computed, the reporting deviation (RepDev) was defined as the difference between Expected and the number of SRC actually reported at that year's baseline exam. A second deviation (RepDevSO) was computed using only those SRC that occurred while the participant was enrolled in the current study. One-way intraclass correlations (ICC) were computed between the expected and reported numbers of SRC.
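The bookkeeping just described (Expected as the sum of Reported and Recorded, RepDev as Expected minus the next baseline's report) plus a one-way ICC can be sketched on simulated athletes. The counts, under-reporting rate and score distributions below are illustrative, not CARE Consortium data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical athletes: SRC reported at the year-1 baseline, SRC recorded
# during season 1, and the count each athlete reports at the year-2 baseline.
reported_y1 = rng.integers(0, 3, 200)
recorded_s1 = rng.integers(0, 2, 200)
expected_y2 = reported_y1 + recorded_s1
# Most athletes report consistently; a simulated ~15% under-report by one.
under = (rng.random(200) < 0.15).astype(int)
reported_y2 = np.maximum(expected_y2 - under, 0)

# Reporting deviation: expected minus actually reported at the next baseline.
rep_dev = expected_y2 - reported_y2
consistent = float(np.mean(rep_dev == 0))

def icc_oneway(a, b):
    """One-way random-effects ICC(1) for paired counts a, b."""
    data = np.column_stack([a, b]).astype(float)
    n, k = data.shape
    grand = data.mean()
    row_means = data.mean(axis=1)
    ms_between = k * np.sum((row_means - grand) ** 2) / (n - 1)
    ms_within = np.sum((data - row_means[:, None]) ** 2) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

icc = icc_oneway(expected_y2, reported_y2)
```

The ICC(1) formula treats each athlete as a "target" rated twice (expected vs. reported), so perfect consistency drives the within-athlete mean square to zero and the ICC toward 1.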
Results:
341 athletes had a history of at least one SRC and 206 of those (60.4%) had a RepDev of 0. The overall ICC for RepDev was 0.761 (95% CI 0.73-0.79). The presence of depression (ICC 0.87, 95% CI 0.79-0.92) and loss of consciousness (ICC 0.80, 95% CI 0.72-0.86) were associated with higher ICCs compared to athletes without these variables. Female athletes demonstrated higher self-report consistency (ICC 0.82, 95% CI 0.79-0.85) compared to male athletes (ICC 0.72, 95% CI 0.68-0.76). Differences in the classification of RepDev according to sex and sport were found to be significant (χ2=77.6, df=56, p=0.03). The sports with the highest consistency were Women’s Tennis, Men’s Diving, and Men’s Tennis, each with 100% consistency between academic years. Sports with the lowest consistency were Women’s Gymnastics (69%), Men’s Lacrosse (70%), and Football (72%). 96 athletes had at least one study-only SRC in the previous year and 69 of those (71.9%) had a RepDevSO of 0 (ICC 0.673, 95% CI 0.64-0.71).
Conclusions:
Approximately 40% of athletes do not consistently report their SRC history, potentially further complicating the clinical management of SRC. These findings encourage clinicians to be aware of factors which could influence the reliability of self-reported SRC history.
The Rey Complex Figure (CF) is a popular test to assess visuospatial construction and visual memory, but its broader use in clinical research is limited by scoring complexity. To widen its application, we developed a new CF scoring system similar to the Benson Figure in which 10 primary CF elements are scored according to presence and location. A novel recognition task was also created for each of these 10 items consisting of a 4-choice recognition condition containing the primary rectangle and major interior lines with qualitative variations of target elements as distractors. The current investigation was designed to characterize the relationship between scoring methods and establish whether comparable results are obtained across both traditional and new CF scoring approaches.
Participants and Methods:
Participants from the Emory Healthy Brain Study (EHBS) who had completed the Rey CF copy during their cognitive study visit were studied. All participants self-identified as normal and were administered the CF according to our previously published procedure that included the Copy, Immediate Recall (∼30 seconds), and 30-minute Delayed Recall conditions (Loring et al., 1990). Following delayed recall, CF recognition was assessed using the Meyers and Meyers (1995) recognition trial, followed by the newly developed forced-choice recognition. The final sample included 155 participants ranging in age from 51.6 to 80.0 years (M=64.9, SD=6.6). The average MoCA score was 26.8/30 (SD=6.6).
Results:
Mean performance levels across conditions and scoring approaches are included in the table. Correlations between Copy, Immediate Recall, Delayed Recall, and Recognition were calculated to evaluate the relationship between the traditional 18-item/36-point Osterrieth criteria and the newly developed CF scoring criteria using both parametric and non-parametric approaches. Pearson correlations demonstrated high agreement between approaches when characterizing performance levels across all CF conditions (Copy r=.72, Immediate Recall r=.87, Delayed Recall r=.90, and Recognition r=.52). Similar correlations were present using non-parametric analyses (Copy ρ=.46, Immediate Recall ρ=.83, Delayed Recall ρ=.91, and Recognition ρ=.42).
Table. Mean performance levels across conditions and scoring approaches
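The parametric/non-parametric comparison above amounts to Pearson correlation on raw scores versus Pearson correlation on ranks (Spearman's ρ). A small numpy-only sketch on simulated paired scores, where the score ranges and the linear mapping between scoring systems are assumptions, is:

```python
import numpy as np

rng = np.random.default_rng(4)

def pearson(a, b):
    az = (a - a.mean()) / a.std()
    bz = (b - b.mean()) / b.std()
    return float(np.mean(az * bz))

def spearman(a, b):
    # Rank-transform then take Pearson; valid here because the simulated
    # scores are continuous, so ties are vanishingly unlikely.
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return pearson(rank(a), rank(b))

# Hypothetical paired scores for 155 participants: traditional 36-point
# Osterrieth totals vs. a simplified 10-element score (values illustrative).
osterrieth = rng.uniform(5, 36, 155)
emory = 0.25 * osterrieth + rng.normal(0.0, 0.8, 155)

r = pearson(osterrieth, emory)
rho = spearman(osterrieth, emory)
```

When the response distribution is compressed (as noted for the copy and recognition conditions), rank-based ρ can diverge noticeably from Pearson r, which is why reporting both is informative here.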
Conclusions:
The high correlations, particularly for Immediate and Delayed Recall conditions, suggest that the modified simpler scoring system is comparable to the traditional approach, thereby suggesting potential equivalence between scoring methods. When comparing Rey’s original 47 point scoring approach to his 36 point scoring system, Osterrieth (1944) reported a correlation in fifty adults of ρ=.95 and a correlation in twenty 6-year-olds of ρ=.92. In this investigation, lower correlations were observed for copy and recognition conditions, in part representing smaller response distribution across participants. Although these preliminary results are encouraging, to implement the new EHBS scoring method in clinical evaluation, we are developing normative data in participants across the entire EHBS series, many of whom were not administered the new CF Recognition. We are also examining performances in patients undergoing DBS evaluation for Parkinson Disease to explore its clinical sensitivity. Simpler scoring will permit greater CF clinical and research application.
The COVID-19 pandemic laid bare systemic inequities shaped by social determinants of health (SDoH). Public health agencies, legislators, health systems, and community organizations took notice, and there is currently unprecedented interest in identifying and implementing programs to address SDoH. This special issue focuses on the role of medical-legal partnerships (MLPs) in addressing SDoH and racial and social inequities, as well as the need to support these efforts with evidence-based research, data, and meaningful partnerships and funding.
State Medical Boards (SMBs) can take severe disciplinary actions (e.g., license revocation or suspension) against physicians who commit egregious wrongdoing in order to protect the public. However, there is noteworthy variability in the extent to which SMBs impose severe disciplinary action. In this manuscript, we present and synthesize a subset of 11 recommendations based on findings from our team’s larger consensus-building project that identified a list of 56 policies and legal provisions SMBs can use to better protect patients from egregious wrongdoing by physicians.
OBJECTIVES/GOALS: The United States is experiencing an epidemic of firearm deaths and injuries. Poverty and other socioeconomic factors have been linked to firearm injuries on the national level. The goal of this study is to examine the relationship between county level poverty and firearm injuries in the State of Maryland. METHODS/STUDY POPULATION: This is a cross sectional study assessing fatal and non-fatal firearm injuries of all ages between 2018-2020 utilizing data from the State of Maryland’s Health Services Cost Review Commission. Our primary analysis will involve calculating injury and mortality rates to assess if fatal and non-fatal firearm injuries are associated with county-level poverty, defined as the percentage of the population living below the federal poverty line. Rates will be calculated by determining county level population within subgroups using the National Historical Geographical Information System database. We will also conduct regression analyses to adjust for confounding variables selected based on evidence from prior research. Some of these variables include age, sex, race, urbanicity, and the social vulnerability index. RESULTS/ANTICIPATED RESULTS: An abundance of prior research has demonstrated differences in firearm injury by age, sex, and race. Prior studies have also shown that poverty is associated with higher rates of firearm-related deaths among youth. Based on that foundational data, we anticipate that regression analyses will demonstrate that counties with higher poverty levels will have higher rates of fatal and non-fatal firearm injuries, even after controlling for other known risk factors. DISCUSSION/SIGNIFICANCE: Findings from this study will contribute to growing evidence on the role of poverty in the burden of firearm injuries and mortality. This will have policy implications regarding the allocation of public health resources and interventions aimed at reducing firearm-related injuries and deaths in Maryland.
OBJECTIVES/GOALS: Our long-term goal is to understand how both genetic and environmental (GxE) factors contribute to neurodevelopmental disorders (NDDs) so that we may potentially intervene in disease pathogenesis and design therapies to address functional deficiencies. METHODS/STUDY POPULATION: Our studies use a novel GxE model to determine how cephalosporin antibiotic exposure alters the gut microbiome, hippocampal neurogenesis, and behavior in the genetically vulnerable 16p11.2 microdeletion (16pDel) mouse. This mouse models one of the most frequently observed genetic risk variants implicated in NDDs, including ~1% of autism diagnoses. Wildtype and 16pDel littermates were exposed to saline or the cephalosporin cefdinir from postnatal days 5-9. We quantified changes in gut microbiota composition using 16S rRNA gene sequencing and utilized immunoblotting, immunohistochemistry, and bulk RNA sequencing to assess changes in hippocampal neurogenesis. An additional cohort of saline- or cefdinir-exposed mice was subjected to a behavioral battery to assess changes in sociability and anxiety. RESULTS/ANTICIPATED RESULTS: We leveraged the next-generation microbiome bioinformatics platform Quantitative Insights Into Microbial Ecology 2 (QIIME2) to analyze 16S rRNA gene sequencing datasets of P13 cecal samples from saline- and cefdinir-exposed mice. We found that early-life cefdinir exposure successfully perturbed the gut microbiome. Further, we found a robust 50% reduction in hippocampal cyclin E protein in cefdinir-exposed 16pDel male mice, which was replicated in a second independent experiment. This reduction extended to the S-phase cell entry and general stem cell populations, quantified by EdU+ and Ki67+ cell numbers, respectively. Lastly, in our first cohort of mice for behavioral studies, we found reduced sociability and increased anxiety-like behaviors in cefdinir-exposed mice.
DISCUSSION/SIGNIFICANCE: The findings from this GxE model will provide mechanistic insights into the causes of NDDs; they may inform practice guidelines so as to reduce this environmental exposure; and may suggest interventions like probiotics for those at risk in order to overcome altered gut microbiome composition and restore hippocampal neurogenesis defects.