Patients with posttraumatic stress disorder (PTSD) exhibit smaller volumes in commonly reported brain regions, including the amygdala and hippocampus, which are associated with fear and memory processing. In the current study, we conducted a voxel-based morphometry (VBM) meta-analysis of whole-brain statistical maps using neuroimaging data from the ENIGMA-PGC PTSD working group.
Methods
T1-weighted structural neuroimaging scans from 36 cohorts (PTSD n = 1309; controls n = 2198) were processed using a standardized VBM pipeline (ENIGMA-VBM tool). We meta-analyzed the resulting statistical maps for voxel-wise differences in gray matter (GM) and white matter (WM) volumes between PTSD patients and controls, performed subgroup analyses considering the trauma exposure of the controls, and examined associations between regional brain volumes and clinical variables including PTSD (CAPS-4/5, PCL-5) and depression severity (BDI-II, PHQ-9).
Results
PTSD patients exhibited smaller GM volumes across the frontal and temporal lobes and the cerebellum, with the most significant effect in the left cerebellum (Hedges’ g = 0.22, corrected p = .001), and smaller cerebellar WM volume (peak Hedges’ g = 0.14, corrected p = .008). We observed similar regional differences when comparing patients to trauma-exposed controls, suggesting these structural abnormalities may be specific to PTSD. Regression analyses revealed that PTSD severity was negatively associated with GM volumes within the cerebellum (corrected p = .003), while depression severity was negatively associated with GM volumes within the cerebellum and superior frontal gyrus in patients (corrected p = .001).
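For reference, a minimal sketch of how a bias-corrected standardized mean difference (Hedges’ g) of the kind reported above can be computed from group summary statistics; the means, SDs, and regional values below are made-up illustration numbers, not data from the study.

```python
import numpy as np

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Bias-corrected standardized mean difference (Hedges' g)."""
    # Pooled standard deviation
    sp = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp                   # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)      # small-sample correction factor
    return j * d

# Hypothetical regional GM volumes (mL) for controls vs. PTSD patients
print(round(hedges_g(m1=7.90, sd1=0.80, n1=2198,
                     m2=7.72, sd2=0.82, n2=1309), 2))
```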
Conclusions
PTSD patients exhibited widespread, regional differences in brain volumes where greater regional deficits appeared to reflect more severe symptoms. Our findings add to the growing literature implicating the cerebellum in PTSD psychopathology.
Objectives/Goals:
Team science (TS) competency is important for translational science team collaboration. However, there are few educators available to assist teams. Asynchronous learning is an effective strategy for delivering TS content. The goal of this project is to expand TS education by providing our learners with online access to TS modules.
Methods/Study Population:
The Collaboration and Team Science (CaTS) team at the University of Cincinnati provides a robust TS education and training program. As the need for team science gains recognition, CaTS has received increased requests for services, leading to a need to broaden TS offerings. To address this demand, the CaTS team created “Team Science 101,” an online, asynchronous series of 15 modules covering basic team science concepts. Each module consists of an educational recording lasting an average of 20 minutes, optional topic resources, pre- and post-module surveys assessing learners’ confidence and satisfaction, post-module knowledge checks, and evaluation questions. Upon completing all modules, participants receive a completion certificate.
Results/Anticipated Results:
TS 101 will be piloted with a group of participants who expressed interest in asynchronous TS content and will be adjusted based on the feedback received. The associated pre- and post-module surveys, post-module knowledge checks, and evaluation questions will be monitored to determine learning levels and improve TS 101 overall. Canvas is the educational platform that houses these modules, allowing for participant follow-up and scalable dissemination. The CaTS team plans to disseminate TS 101 nationally and internationally for anyone interested in this resource.
Discussion/Significance of Impact:
There is a national effort to collect and curate TS education, training, and toolkits. TS 101 will be a useful educational tool that will expand the reach of team science educators, provide the foundation for educators to explore topics more deeply by building on the module topics, and provide education to broader audiences who lack access to TS experts.
Being married may protect late-life cognition. Less is known about living arrangement among unmarried adults and mechanisms such as brain health (BH) and cognitive reserve (CR) across race and ethnicity or sex/gender. The current study examines (1) associations between marital status, BH, and CR among diverse older adults and (2) whether one’s living arrangement is linked to BH and CR among unmarried adults.
Method:
Cross-sectional data come from the Washington Heights-Inwood Columbia Aging Project (N = 778, 41% Hispanic, 33% non-Hispanic Black, 25% non-Hispanic White; 64% women). Magnetic resonance imaging (MRI) markers of BH included cortical thickness in Alzheimer’s disease signature regions and hippocampal, gray matter, and white matter hyperintensity volumes. CR was residual variance in an episodic memory composite after partialing out MRI markers. Exploratory analyses stratified by race and ethnicity and sex/gender and included potential mediators.
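As an illustration of the residual-based cognitive reserve measure described above, a minimal sketch in which CR is taken as the residual of a memory composite after regressing out MRI markers; the variables and simulated values are hypothetical, not the WHICAP data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 778

# Hypothetical standardized MRI markers of brain health: cortical thickness,
# hippocampal volume, gray matter volume, WMH volume
mri = rng.standard_normal((n, 4))

# Hypothetical episodic memory composite, partly explained by the markers
memory = mri @ np.array([0.3, 0.25, 0.2, -0.15]) + rng.standard_normal(n)

# Cognitive reserve (CR) as the residual variance in memory
# after partialing out the MRI markers
X = sm.add_constant(mri)
cr = sm.OLS(memory, X).fit().resid   # one residual (CR score) per person
print(cr[:5].round(2))
```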
Results:
Marital status was associated with CR but not BH. Previously married individuals (i.e., divorced, widowed, or separated) had lower CR than their married counterparts in the full sample, among White and Hispanic subgroups, and among women. Never-married women also had lower CR than married women. These findings were independent of age, education, physical health, and household income. Among never-married individuals, living with others was negatively linked to BH.
Conclusions:
Marriage may protect late-life cognition via CR. Findings also highlight differential effects across race and ethnicity and sex/gender. Marital status could be considered when assessing the risk of cognitive impairment during routine screenings.
The objective of the study was to compare the potential dietary impact of proposed and final front-of-pack labelling (FOPL) regulations (published in Canada Gazette I (CGI) and Canada Gazette II (CGII), respectively) by examining the difference in the prevalence of foods that would require a ‘High in’ front-of-pack nutrition symbol and nutrient intakes from those foods consumed by Canadian adults.
Design:
Foods in a generic food composition database (n 3676) were categorised according to the details of FOPL regulations in CGI and CGII, and the differences in the proportion of foods were compared. Using nationally representative dietary survey data, potential intakes of nutrients from foods that would display a ‘High in’ nutrition symbol according to CGI and CGII were compared.
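To make the categorisation step concrete, a minimal sketch of flagging foods that would carry a ‘High in’ symbol; the 15 % daily-value threshold, the daily values, and the example food are illustrative assumptions, not the actual CGI/CGII regulatory criteria.

```python
# Hypothetical daily values (per Health Canada-style reference amounts)
DAILY_VALUES = {"saturated_fat_g": 20, "sugars_g": 100, "sodium_mg": 2300}
THRESHOLD = 0.15  # assumed fraction of the daily value per serving

def high_in_flags(nutrients_per_serving):
    """Return the nutrients-of-concern that meet or exceed the threshold."""
    return [n for n, amount in nutrients_per_serving.items()
            if amount / DAILY_VALUES[n] >= THRESHOLD]

# Example (invented) nutrient amounts for one serving of a packaged food
print(high_in_flags({"saturated_fat_g": 4.0, "sugars_g": 18.0, "sodium_mg": 260}))
# -> ['saturated_fat_g', 'sugars_g']
```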
Setting:
Canada
Participants:
Canadian adults (≥ 19 years; n 13 495)
Results:
Compared with CGI, fewer foods would display a ‘High in’ nutrition symbol (Δ = –6 %) according to CGII (saturated fat = –4 %, sugars = –1 %, sodium = –3 %). Similarly, potential intakes of nutrients-of-concern from foods that would display a ‘High in’ nutrition symbol were reduced according to CGII compared with CGI (saturated fat = –21 %, sugars = –2 %, sodium = –6 %). Potential intakes from foods that would display a ‘High in’ nutrition symbol were also reduced for energy and nutrients-to-encourage, including protein, fibre, calcium and vitamin D.
Conclusions:
Changes to FOPL regulations may have blunted their potential to limit intakes of nutrients-of-concern; however, they likely averted potential unintended consequences on intakes of nutrients-to-encourage for Canadians (e.g. calcium and vitamin D). To ensure policy objectives are met, FOPL regulations must be monitored regularly and evaluated over time.
In the United States, all 50 states and the District of Columbia have Good Samaritan Laws (GSLs). Designed to encourage bystanders to aid at the scene of an emergency, GSLs generally limit the risk of civil tort liability if the care is rendered in good faith. Nation-wide, a leading cause of preventable death is uncontrolled external hemorrhage. Public bleeding control initiatives aim to train the public to recognize life-threatening external bleeding, perform life-sustaining interventions (including direct pressure, tourniquet application, and wound packing), and to promote access to bleeding control equipment to ensure a rapid response from bystanders.
Methods:
This study sought to identify the GSLs in each state and the District of Columbia, determine what type of responder is covered by each law (eg, all laypersons, only trained individuals, or only licensed health care providers), and establish whether bleeding control is explicitly included or excluded in Good Samaritan coverage.
Results:
Good Samaritan Laws providing civil liability qualified immunity were identified in all 50 states and the District of Columbia. One state, Oklahoma, specifically includes bleeding control in its GSLs. Six states – Connecticut, Illinois, Kansas, Kentucky, Michigan, and Missouri – have laws that define those covered under Good Samaritan immunity, generally limiting protection to individuals trained in a standard first aid or resuscitation course or to health care clinicians. No state explicitly excludes bleeding control from its GSLs, and one state expressly includes it.
Conclusion:
Across the United States, most states provide broad bystander coverage within their GSLs for emergency medical conditions of all types, including bleeding emergencies, and no state explicitly excludes bleeding control interventions. Some states restrict coverage to health care personnel or bystanders who have completed a specific training program. Opportunity exists for additional research into those states whose GSLs may not be inclusive of bleeding control interventions.
Individuals often use self-directed strategies to manage their intake of tempting foods, but which strategies they use and whether these are effective are not well understood. This study assessed the frequency of use and subjective effectiveness of self-directed strategies in relation to BMI and snack intake.
Design:
A cross-sectional and prospective study with three time points (T1: baseline, T2: 3 months and T3: 3 years). At T1, demographics, frequency of use and subjective effectiveness of forty-one identified strategies were assessed. At T2 and T3, current weight was reported, and at T2 frequency of snack intake was also recorded.
Setting:
Online study in the UK.
Participants:
Data from 368 participants (mean age = 34·41 years; mean BMI = 25·06 kg/m²) were used for analysis at T1, n = 170 (46·20 % of the total sample) at T2 and n = 51 (13·59 %) at T3.
Results:
Two strategy factors were identified via principal axis factoring: (1) diet, exercise, reduction of temptations, and cognitive strategies, and (2) planning, preparation and eating style. For factor 1, frequency of use, but not subjective effectiveness, was positively related to BMI at T1. Subjective effectiveness predicted increases in BMI from T1 to T3 and from T2 to T3. No relationship to snack intake was found. For factor 2, frequency of use was negatively related to BMI at T1. Neither frequency of use nor subjective effectiveness was related to changes in BMI over time, but subjective effectiveness was negatively correlated with unhealthy snack intake.
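A minimal sketch of a principal axis factoring step of this kind using the Python factor_analyzer package; the 41 strategy-item ratings are simulated and the two-factor solution is only illustrative, not the study analysis.

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(1)
n_items, n_people = 41, 368

# Simulated frequency-of-use ratings for 41 strategy items (illustrative only)
latent = rng.standard_normal((n_people, 2))          # two underlying factors
loadings = rng.uniform(0.2, 0.8, size=(n_items, 2))
items = latent @ loadings.T + rng.standard_normal((n_people, n_items))
df = pd.DataFrame(items, columns=[f"item_{i+1}" for i in range(n_items)])

# Principal axis factoring with an oblique rotation, two factors retained
fa = FactorAnalyzer(n_factors=2, method="principal", rotation="oblimin")
fa.fit(df)
print(pd.DataFrame(fa.loadings_, index=df.columns).round(2).head())
```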
Conclusion:
Self-directed strategies to reduce the intake of tempting foods are not consistently related to BMI or snack intake.
The AD8 is a validated screening instrument for functional changes that may be caused by cognitive decline and dementia. It is frequently used in clinics and research studies because it is short and easy to administer, with a cutoff score of 2 out of 8 items recommended to maximize sensitivity and specificity. This cutoff assumes that all 8 items provide equivalent “information” about everyday functioning. In this study, we used item response theory (IRT) to test this assumption. To determine the relevance of this measure of everyday functioning in men and women, and across race, ethnicity, and education, we conducted differential item functioning (DIF) analysis to test for item bias.
Participants and Methods:
Data came from the 2021 follow up of the High School & Beyond cohort (N=8,690; mean age 57.5 ± 1.2; 55% women), a nationally representative, longitudinal study of Americans who were first surveyed in 1980 when they were in the 10th or 12th grade. Participants were asked AD8 questions about their own functioning via phone or internet survey. First, we estimated a one-parameter (i.e., differing difficulty, equal discrimination across items) and two-parameter IRT model (i.e., differing difficulty and differing discrimination across items). We compared model fit using a likelihood-ratio test. Second, we tested for uniform and non-uniform DIF on AD8 items by sex, race and ethnicity (non-Hispanic White, non-Hispanic Black, Hispanic), education level (high school or less, some college, BA degree or more), and survey mode (phone or internet). We examined DIF salience by comparing the difference between original and DIF-adjusted AD8 scores to the standard error of measurement of the original score.
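To make the model comparison concrete, a minimal sketch of the two-parameter logistic item response function and the likelihood-ratio test used to compare nested IRT models; the log-likelihood values below are placeholders, not results from the High School & Beyond data.

```python
import numpy as np
from scipy.stats import chi2

def p_endorse(theta, a, b):
    """Two-parameter logistic IRT: probability of endorsing an item.
    a = discrimination, b = difficulty; a 1PL model fixes a across items."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

print(p_endorse(theta=0.5, a=1.8, b=1.0).round(3))   # high-discrimination item
print(p_endorse(theta=0.5, a=0.6, b=1.0).round(3))   # low-discrimination item

# Likelihood-ratio test of 2PL vs. 1PL (placeholder log-likelihoods);
# for 8 items the 2PL adds 7 discrimination parameters beyond a common one
ll_1pl, ll_2pl, extra_params = -30125.4, -30080.9, 7
lr_stat = -2 * (ll_1pl - ll_2pl)
print(lr_stat, chi2.sf(lr_stat, df=extra_params))
```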
Results:
The two-parameter IRT model fit the data significantly better than the one-parameter model, indicating that some items were more strongly related to underlying everyday functional ability than others. For example, the “problems with judgment” item had higher discrimination (more information) than the “less interest in hobbies/activities” item. There were significant differences in item endorsement by race/ethnicity, education, and survey mode. We found significant uniform and non-uniform DIF on several items across each of these groups. For example, for a given level of functional decline (theta) White participants were more likely to endorse “Daily problems with thinking/memory” than Black and Hispanic participants. The DIF was salient (i.e., caused AD8 scores to change by greater than the standard error of measurement for a large portion of respondents) for those with a college degree and phone respondents.
Conclusions:
In a population-representative sample of Americans approximately 57 years old, the items on the AD8 contributed differing levels of discrimination along the range of everyday functioning that is affected by later-life cognitive impairment. This suggests that a simple cutoff or summed score may not be appropriate, since some items yield more information about the underlying construct than others. Furthermore, because we observed significant and salient DIF on several items by education and survey mode, AD8 scores should not be compared across education groups and assessment modes without adjustment for this measurement bias.
Anterior temporal lobectomy is a common surgical approach for medication-resistant temporal lobe epilepsy (TLE). Prior studies have shown inconsistent findings regarding the utility of presurgical intracarotid sodium amobarbital testing (IAT; also known as Wada test) and neuroimaging in predicting postoperative seizure control. In the present study, we evaluated the predictive utility of IAT, as well as structural magnetic resonance imaging (MRI) and positron emission tomography (PET), on long-term (3-years) seizure outcome following surgery for TLE.
Participants and Methods:
Patients consisted of 107 adults (mean age=38.6, SD=12.2; mean education=13.3 years, SD=2.0; female=47.7%; White=100%) with TLE (mean epilepsy duration =23.0 years, SD=15.7; left TLE surgery=50.5%). We examined whether demographic, clinical (side of resection, resection type [selective vs. non-selective], hemisphere of language dominance, epilepsy duration), and presurgical studies (normal vs. abnormal MRI, normal vs. abnormal PET, correctly lateralizing vs. incorrectly lateralizing IAT) were associated with absolute (cross-sectional) seizure outcome (i.e., freedom vs. recurrence) with a series of chi-squared and t-tests. Additionally, we determined whether presurgical evaluations predicted time to seizure recurrence (longitudinal outcome) over a three-year period with univariate Cox regression models, and we compared survival curves with Mantel-Cox (log rank) tests.
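A minimal sketch of the univariate Cox regression and log-rank (Mantel-Cox) comparison described above, using the lifelines package on simulated follow-up data; the column names and values are hypothetical, not the study dataset.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(7)
n = 107

# Simulated 3-year follow-up: months to seizure recurrence (censored at 36)
df = pd.DataFrame({
    "normal_pet": rng.integers(0, 2, n),        # 1 = normal presurgical PET
    "months": rng.exponential(30, n).clip(max=36),
    "recurred": rng.integers(0, 2, n),          # 1 = seizure recurrence observed
})

# Univariate Cox model: hazard of recurrence by presurgical PET status
cph = CoxPHFitter().fit(df, duration_col="months", event_col="recurred")
cph.print_summary()

# Log-rank test comparing the two survival curves
a, b = df[df.normal_pet == 1], df[df.normal_pet == 0]
print(logrank_test(a.months, b.months,
                   event_observed_A=a.recurred,
                   event_observed_B=b.recurred).p_value)
```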
Results:
Demographic and clinical variables (including type [selective vs. whole lobectomy] and side of resection) were not associated with seizure outcome. No associations were found among the presurgical variables. Presurgical MRI was not associated with cross-sectional (OR = 1.5, p = .557, 95% CI = 0.4-5.7) or longitudinal (HR = 1.2, p = .641, 95% CI = 0.4-3.9) seizure outcome. Normal PET scan (OR = 4.8, p = .045, 95% CI = 1.0-24.3) and IAT incorrectly lateralizing to the seizure focus (OR = 3.9, p = .018, 95% CI = 1.2-12.9) were associated with higher odds of seizure recurrence. Furthermore, normal PET scan (HR = 3.6, p = .028, 95% CI = 1.0-13.5) and incorrectly lateralizing IAT (HR = 2.8, p = .012, 95% CI = 1.2-7.0) predicted earlier seizure recurrence within three years of TLE surgery. Log-rank tests indicated that survival functions differed significantly for normal vs. abnormal PET and for incorrectly vs. correctly lateralizing IAT; on average, these patients relapsed five and seven months earlier, respectively.
Conclusions:
A normal presurgical PET scan and incorrectly lateralizing IAT were associated with increased risk of post-surgical seizure recurrence and a shorter time to seizure relapse.
This volume, part of the Feminist Judgment Series, shows how feminist legal theory along with critical race theory and intersectional modes of critique might transform immigration law. Here, a diverse collection of scholars and lawyers bring critical feminist, race and intersectional insights to Supreme Court opinions that deal with the source of the power to regulate immigration, state and local regulation of immigration, citizenship law, racial discrimination, employment law, access to public education, the rights of criminal defendants, the detention of noncitizens, and more. Feminist reasoning values the perspectives of outsiders, exposes the deep-rooted bias in the legal opinions of courts, and illuminates the effects of ostensibly neutral policies that create and maintain oppression and hierarchy. One by one, the chapters in this book reimagine the norms that drive immigration policies and practices. In place of discrimination and subordination, the authors here demand welcome and equality. Where current law omits the voice and stories of noncitizens, the authors here center their lives and experiences. Collectively, they reveal how a feminist vision of immigration law could center a commitment to equality and justice and foster a country where diverse newcomers readily flourish with dignity.
Children with fragile X syndrome (FXS) often avoid eye contact, a behavior that is potentially related to hyperarousal. Prior studies, however, have focused on between-person associations rather than coupling of within-person changes in gaze behaviors and arousal. In addition, there is debate about whether prompts to maintain eye contact are beneficial for individuals with FXS. In a study of young females (ages 6–16), we used eye tracking to assess gaze behavior and pupil dilation during social interactions in a group with FXS (n = 32) and a developmentally similar comparison group (n = 23). Participants engaged in semi-structured conversations with a female examiner during blocks with and without verbal prompts to maintain eye contact. We identified a social–behavioral and psychophysiological profile that is specific to females with FXS; this group exhibited lower mean levels of eye contact, significantly greater mean pupil dilation during conversations that included prompts to maintain eye contact, and stronger positive coupling between eye contact and pupil dilation. Our findings strengthen support for the perspective that gaze aversion in FXS reflects negative reinforcement of social avoidance behavior. We also found that behavioral skills training may improve eye contact, but maintaining eye contact appears to be physiologically taxing for females with FXS.
To evaluate efficiency and impact of a novel antimicrobial stewardship program (ASP) prospective-audit-with-feedback (PAF) review process using the Cerner Multi-Patient Task List (MPTL).
Design:
Retrospective cohort study.
Setting:
A 367-bed free-standing, pediatric academic medical center.
Methods:
The ASP PAF review process expanded to monitor all systemic and inhaled antibiotics through use of the MPTL on July 23, 2020. Average number of daily ASP reviews, absolute number of monthly interventions, and time to conduct ASP reviews were compared between the preimplementation period and the postimplementation period following expansion. Antibiotic days of therapy (DOT) per 1,000 patient days for overall and select antibiotics were compared between periods. ASP intervention characteristics were assessed.
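As a concrete illustration of the utilization metric compared between periods, a minimal sketch of a days-of-therapy (DOT) rate calculation; the monthly counts are invented example numbers, not data from the study.

```python
def dot_per_1000_patient_days(days_of_therapy, patient_days):
    """Antibiotic days of therapy (DOT) normalized per 1,000 patient days."""
    return 1000 * days_of_therapy / patient_days

# Hypothetical monthly totals for the unit, before and after expansion
pre  = dot_per_1000_patient_days(days_of_therapy=2870, patient_days=6400)
post = dot_per_1000_patient_days(days_of_therapy=2650, patient_days=6400)
print(round(pre, 1), round(post, 1))
```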
Results:
Average daily ASP reviews significantly increased following program expansion (9 vs 14 reviews; P < .0001), and the absolute number of ASP interventions each month also increased (34 vs 52 interventions; P ≤ .0001). Time to conduct daily ASP reviews increased in the postimplementation period (1.03 vs 1.32 hours). Overall antibiotic DOT per 1,000 patient days significantly decreased in the postimplementation period (457.9 vs 427.9; P < .0001) as well as utilization of select, narrow-spectrum antibiotics such as ampicillin and clindamycin. Intervention type and antibiotics were similar between periods. The ASP documented 128 “nonantibiotic interventions” in the postimplementation period, including culture and/or susceptibility testing (32.8%), immunizations (25.8%), and additional diagnostic testing (22.7%).
Conclusions:
Implementation of an ASP PAF review process using the MPTL allowed for efficient expansion of a pre-existing ASP and a decrease in overall antibiotic utilization. ASP documentation was enhanced to fully track the impact of the program.
To describe the genomic analysis and epidemiologic response related to a slow and prolonged methicillin-resistant Staphylococcus aureus (MRSA) outbreak.
Design:
Prospective observational study.
Setting:
Neonatal intensive care unit (NICU).
Methods:
We conducted an epidemiologic investigation of a NICU MRSA outbreak involving serial baby and staff screening to identify opportunities for decolonization. Whole-genome sequencing was performed on MRSA isolates.
Results:
A NICU with excellent hand hygiene compliance and longstanding minimal healthcare-associated infections experienced an MRSA outbreak involving 15 babies and 6 healthcare personnel (HCP). In total, 12 cases occurred slowly over a 1-year period (mean, 30.7 days apart) followed by 3 additional cases 7 months later. Multiple progressive infection prevention interventions were implemented, including contact precautions and cohorting of MRSA-positive babies, hand hygiene observers, enhanced environmental cleaning, screening of babies and staff, and decolonization of carriers. Only decolonization of HCP found to be persistent carriers of MRSA was successful in stopping transmission and ending the outbreak. Genomic analyses identified bidirectional transmission between babies and HCP during the outbreak.
Conclusions:
In comparison to fast outbreaks, outbreaks that are “slow and sustained” may be more common in units with strong existing infection prevention practices, where a series of breaches has to align to result in a case. We identified a slow outbreak that persisted among staff and babies and was stopped only by identifying and decolonizing persistent MRSA carriage among staff. A repeated decolonization regimen was successful in allowing previously persistent carriers to safely continue work duties.
People with congenital heart disease (CHD) are at increased risk for executive functioning deficits. Meta-analyses of executive function measures in people with CHD compared with healthy controls have not been reported.
Objective:
To examine differences in executive functions in individuals with CHD compared to healthy controls.
Data sources:
We performed a systematic review of publications from 1 January, 1986 to 15 June, 2020 indexed in PubMed, CINAHL, EMBASE, PsycInfo, Web of Science, and the Cochrane Library.
Study selection:
Inclusion criteria were (1) studies containing at least one executive function measure; (2) participants were over the age of three.
Data extraction:
Data extraction and quality assessment were performed independently by two authors. We used a shifting unit-of-analysis approach and pooled data using a random effects model.
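A minimal sketch of pooling standardized mean differences with a DerSimonian-Laird random-effects model, of the kind used for the pooled estimates below; the per-study effects and variances are made-up illustration values.

```python
import numpy as np

def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects pooled estimate and its variance."""
    effects, variances = np.asarray(effects), np.asarray(variances)
    w = 1 / variances                              # fixed-effect weights
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)         # heterogeneity statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)  # between-study variance
    w_re = 1 / (variances + tau2)                  # random-effects weights
    pooled = np.sum(w_re * effects) / np.sum(w_re)
    return pooled, 1 / np.sum(w_re)

# Hypothetical per-study SMDs (CHD minus controls) and their variances
smd = [-0.55, -0.70, -0.48, -0.62]
var = [0.010, 0.020, 0.015, 0.012]
est, var_est = random_effects_pool(smd, var)
print(round(est, 3), round(1.96 * var_est ** 0.5, 3))  # pooled SMD, 95% CI half-width
```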
Results:
The search yielded 61,217 results. Twenty-eight studies met criteria. A total of 7789 people with CHD were compared with 8187 healthy controls. We found the following standardised mean differences: −0.628 (−0.726, −0.531) for cognitive flexibility and set shifting, −0.469 (−0.606, −0.333) for inhibition, −0.369 (−0.466, −0.273) for working memory, −0.334 (−0.546, −0.121) for planning/problem solving, −0.361 (−0.576, −0.147) for summary measures, and −0.444 (−0.614, −0.274) for reporter-based measures (p < 0.001).
Limitations:
Our analysis consisted of cross-sectional and observational studies. We could not quantify the effect of collinearity.
Conclusions:
Individuals with CHD appear to have at least moderate deficits in executive functions. Given the growing population of people with CHD, more attention should be devoted to identifying executive dysfunction in this vulnerable group.
This is the first report on the association between trauma exposure and depression from the Advancing Understanding of RecOvery afteR traumA (AURORA) multisite longitudinal study of adverse post-traumatic neuropsychiatric sequelae (APNS) among participants seeking emergency department (ED) treatment in the aftermath of a traumatic life experience.
Methods
We focus on participants presenting at EDs after a motor vehicle collision (MVC), which characterizes most AURORA participants, and examine associations of participant socio-demographics and MVC characteristics with 8-week depression as mediated through peritraumatic symptoms and 2-week depression.
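A minimal sketch of a mediation-style decomposition like the one described, using ordinary regression on simulated data; the variable names and coefficients are hypothetical, and the analysis in the study is more elaborate.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 1000

# Simulated predictor (e.g., passenger vs. driver), mediator (2-week depression
# score), and outcome (8-week depression score); all values are illustrative
df = pd.DataFrame({"passenger": rng.integers(0, 2, n)})
df["dep_2wk"] = 0.4 * df.passenger + rng.standard_normal(n)
df["dep_8wk"] = 0.3 * df.dep_2wk + 0.1 * df.passenger + rng.standard_normal(n)

# Product-of-coefficients mediation: a = predictor -> mediator,
# b = mediator -> outcome (adjusting for the predictor)
a = smf.ols("dep_2wk ~ passenger", df).fit().params["passenger"]
fit = smf.ols("dep_8wk ~ dep_2wk + passenger", df).fit()
b, direct = fit.params["dep_2wk"], fit.params["passenger"]
print(f"indirect={a*b:.3f}  direct={direct:.3f}  total={a*b + direct:.3f}")
```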
Results
Eight-week depression prevalence was relatively high (27.8%) and associated with several MVC characteristics (being passenger v. driver; injuries to other people). Peritraumatic distress was associated with 2-week but not 8-week depression. Most of these associations held when controlling for peritraumatic symptoms and, to a lesser degree, depressive symptoms at 2 weeks post-trauma.
Conclusions
These observations, coupled with substantial variation in the relative strength of the mediating pathways across predictors, raise the possibility of diverse and potentially complex underlying biological and psychological processes. These processes remain to be elucidated in more in-depth analyses of the rich and evolving AURORA database, with the aim of finding new targets for intervention and new tools for risk-based stratification following trauma exposure.
Critical shortages of personal protective equipment, especially N95 respirators, during the coronavirus disease 2019 (COVID-19) pandemic continue to be a source of concern. Novel methods of N95 filtering face-piece respirator decontamination that can be scaled up for in-hospital use can help address this concern and keep healthcare workers (HCWs) safe.
Methods:
A multidisciplinary pragmatic study was conducted to evaluate the use of an ultrasonic room high-level disinfection system (HLDS) that generates aerosolized peracetic acid (PAA) and hydrogen peroxide for decontamination of large numbers of N95 respirators. A cycle duration that consistently achieved disinfection of N95 respirators (defined as ≥6 log10 reductions in bacteriophage MS2 and Geobacillus stearothermophilus spores inoculated onto respirators) was identified. The treated masks were assessed for changes to their hydrophobicity, material structure, strap elasticity, and filtration efficiency. PAA and hydrogen peroxide off-gassing from treated masks were also assessed.
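A minimal sketch of the ≥6 log10 reduction criterion used above to define disinfection; the inoculum and recovery counts are hypothetical examples, not measurements from the study.

```python
import math

def log10_reduction(inoculated_count, recovered_count):
    """Log10 reduction of viable organisms after treatment."""
    return math.log10(inoculated_count / recovered_count)

# Hypothetical counts: 10^8 organisms inoculated onto a respirator, 50 recovered
reduction = log10_reduction(inoculated_count=1e8, recovered_count=50)
print(round(reduction, 2), reduction >= 6)   # meets the >=6 log10 criterion
```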
Results:
The PAA room HLDS was effective for disinfection of bacteriophage MS2 and G. stearothermophilus spores on respirators in a 2,447 cubic-foot (69.6 cubic-meter) room with an aerosol deployment time of 16 minutes and a dwell time of 32 minutes. The total cycle time was 1 hour and 16 minutes. After 5 treatment cycles, no adverse effects were detected on filtration efficiency, structural integrity, or strap elasticity. There was no detectable off-gassing of PAA and hydrogen peroxide from the treated masks at 20 and 60 minutes after the disinfection cycle, respectively.
Conclusion:
The PAA room disinfection system provides a rapidly scalable solution for in-hospital decontamination of large numbers of N95 respirators during the COVID-19 pandemic.
The C677T polymorphism in the folate-metabolising enzyme methylenetetrahydrofolate reductase (MTHFR) is associated with hypertension. Riboflavin acts as a cofactor for MTHFR in one-carbon metabolism, which generates methyl groups for use in important biological reactions such as DNA methylation. Supplementation with riboflavin has previously been shown to lower blood pressure in individuals with the MTHFR 677TT genotype. The mechanism regulating this gene-nutrient interaction is currently unknown but may involve aberrant DNA methylation, which has been implicated in hypertension.
Objectives:
The aims of this study were to examine DNA methylation of hypertension-related genes in adults stratified by MTHFR C677T genotype and the effect of riboflavin supplementation on DNA methylation of these genes in individuals with the MTHFR 677TT genotype.
Materials and Methods:
We measured DNA methylation using pyrosequencing in a set of candidate genes associated with hypertension, including angiotensin II receptor type 1 (AGTR1), G nucleotide binding protein subunit alpha 12 (GNA12), insulin-like growth factor 2 (IGF2) and nitric oxide synthase 3 (NOS3). Stored peripheral blood leukocyte samples from participants previously screened for the MTHFR C677T genotype who had participated in targeted randomised controlled trials (1.6 mg/d riboflavin or placebo for 16 weeks) at Ulster University were accessed for this analysis (n = 120).
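A minimal sketch of a baseline between-genotype comparison of mean methylation at a locus using an independent-samples t-test; the percentage-methylation values are simulated, not pyrosequencing data from the trial.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(5)

# Simulated mean % methylation at a hypothetical NOS3 CpG region
cc_genotype = rng.normal(loc=72.0, scale=4.0, size=60)
tt_genotype = rng.normal(loc=69.5, scale=4.0, size=60)

# Compare baseline average methylation between MTHFR CC and TT groups
t_stat, p_value = ttest_ind(cc_genotype, tt_genotype)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```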
Results:
There were significant differences in baseline average methylation between MTHFR CC and TT genotypes at NOS3 (p = 0.026) and AGTR1 (p = 0.045) loci. Riboflavin supplementation in the TT genotype group resulted in altered average methylation at IGF2 (p = 0.025) and CpG site-specific alterations at the AGTR1 and GNA12 loci.
Conclusion:
DNA methylation at genes related to hypertension differed significantly between individuals stratified by MTHFR genotype. Furthermore, in individuals with the MTHFR 677TT genotype, riboflavin supplementation was accompanied by concurrent alterations in DNA methylation at genes linked to hypertension. This is the largest study to date to demonstrate an interaction between DNA methylation of hypertension-related genes and riboflavin supplementation in adults with the MTHFR 677TT genotype. Further work using a genome-wide approach is required to better understand the role of riboflavin in altering DNA methylation in these genetically at-risk individuals.
Online self-reported 24-h dietary recall systems promise increased feasibility of dietary assessment. Comparison against interviewer-led recalls established their convergent validity; however, reliability and criterion-validity information is lacking. The validity of energy intakes (EI) reported using Intake24, an online 24-h recall system, was assessed against concurrent measurement of total energy expenditure (TEE) using doubly labelled water in ninety-eight UK adults (40–65 years). Accuracy and precision of EI were assessed using correlation and Bland–Altman analysis. Test–retest reliability of energy and nutrient intakes was assessed using data from three further UK studies where participants (11–88 years) completed Intake24 at least four times; reliability was assessed using intra-class correlations (ICC). Compared with TEE, participants under-reported EI by 25 % (95 % limits of agreement −73 % to +68 %) in the first recall, 22 % (−61 % to +41 %) for average of first two, and 25 % (−60 % to +28 %) for first three recalls. Correlations between EI and TEE were 0·31 (first), 0·47 (first two) and 0·39 (first three recalls), respectively. ICC for a single recall was 0·35 for EI and ranged from 0·31 for Fe to 0·43 for non-milk extrinsic sugars (NMES). Considering pairs of recalls (first two v. third and fourth recalls), ICC was 0·52 for EI and ranged from 0·37 for fat to 0·63 for NMES. EI reported with Intake24 was moderately correlated with objectively measured TEE and underestimated on average to the same extent as seen with interviewer-led 24-h recalls and estimated weight food diaries. Online 24-h recall systems may offer low-cost, low-burden alternatives for collecting dietary information.
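A minimal sketch of the Bland-Altman comparison of reported energy intake against TEE (here on the percentage scale) and the correlation between the two; the paired values are simulated, not the doubly labelled water measurements.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 98

# Simulated total energy expenditure (TEE, kcal/d) and self-reported EI that
# under-reports on average (illustrative values only)
tee = rng.normal(2500, 350, n)
ei = tee * rng.normal(0.75, 0.20, n)

# Bland-Altman on the percentage scale: (EI - TEE) / TEE
pct_diff = 100 * (ei - tee) / tee
bias = pct_diff.mean()
loa = 1.96 * pct_diff.std(ddof=1)          # half-width of 95% limits of agreement
print(f"mean bias = {bias:.1f}%  LoA = {bias - loa:.1f}% to {bias + loa:.1f}%")
print(f"r(EI, TEE) = {np.corrcoef(ei, tee)[0, 1]:.2f}")
```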