The development of new papaya cultivars with high genetic potential for production, combined with quality traits that meet the demands of emerging markets, facilitates the expansion of the genetic base, reduces production costs, and broadens papaya cultivation. This study continues the largest Brazilian papaya breeding program, a partnership of over 25 years between UENF and Caliman Agrícola S.A. The objective was to evaluate the ability of inbred lines to generate hybrids with market potential in Brazil and for export. A total of 62 hybrids were obtained through a topcross strategy. The lines were evaluated based on their specific combining ability (SCA), and the hybrids were analyzed through estimates of functional and varietal heterosis (VH) using three commercial varieties widely cultivated in Brazil: ‘UENF/CALIMAN 01’, ‘Tainung 01’, and ‘UC10’. Promising lines were identified for both hybrid creation and use as commercial varieties, exhibiting traits desirable for domestic and international markets, such as high fruit firmness and elevated soluble solids content (the lines UCLA08-053 and UCLA08-087 with the ‘Intermediate’ pattern and the lines UCLA08-066, UCLA08-122, and UCLA08-080 with the ‘Formosa’ pattern). Several hybrids, including H23, H26, H51, and H89 of the ‘Intermediate’ type and H4, H9, H19, and H68 of the ‘Formosa’ type, outperformed their parental lines and the commercial varieties. These genotypes demonstrated superior SCA and VH compared with the commercial controls, highlighting their strong genetic potential for the production market: their firmer fruit keeps longer during storage and transportation, allowing it to travel long distances without compromising quality.
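For context, varietal heterosis against a commercial check is conventionally expressed as the percentage deviation of the hybrid mean from the check mean; the estimator actually used in this program may differ, so the following is only the standard textbook form:

$$\mathrm{VH}(\%) = \frac{\bar{X}_{F_1} - \bar{X}_{\mathrm{check}}}{\bar{X}_{\mathrm{check}}} \times 100$$

where $\bar{X}_{F_1}$ is the hybrid trait mean and $\bar{X}_{\mathrm{check}}$ is the mean of the commercial check variety (e.g. ‘UENF/CALIMAN 01’).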
The governance of farm animal welfare is led, in certain countries and sectors, by industry organisations. The aim of this study was to analyse the legitimacy of industry-led farm animal welfare governance, focusing on two examples: the Code of Practice for the Care and Handling of Dairy Cattle and the Animal Care module of the proAction programme in Canada, and the Animal Care module of the Farmers Assuring Responsible Management (FARM) programme in the United States (US). Both are dairy cattle welfare governance programmes led by industry actors who create the standards and audit farms for compliance. We described the normative legitimacy of these systems, based on an input, throughput, and output framework, by performing a document analysis on publicly available information from these organisations’ websites, and found that the legitimacy of both systems was enhanced by their commitment to science, the presence of accountability systems to enforce standards, and wide participation by dairy farms. The Canadian system featured more balanced representation, and its standard development process used a consensus-based model, which bolstered its legitimacy compared with the US system. However, the US system was more transparent regarding audit outcomes than the Canadian system. Both systems face challenges to their legitimacy due to heavy industry representation and limited transparency as to how public feedback is addressed in the standards. These Canadian and US dairy industry standards illustrate the strengths and weaknesses of industry-led farm animal welfare governance.
Understanding resilience in the face of mental health issues is important, especially for young people who deal with a variety of psychological pressures. This study investigated the co-occurrence of several mental health conditions and the role of resilience as a potential intervention target among youth aged 14–25 years in the Nairobi metropolitan area. We recruited 1,972 youths, who completed the following self-administered instruments: resilience (ARM-R), hopelessness (BHS), depression (BDI, PHQ-9), PTSD (HTQ), loneliness (UCLA Loneliness Scale) and suicidality (C-SSRS). Descriptive statistics, Pearson correlations and hierarchical multiple regression analyses were conducted on the data. The key findings were that depression and hopelessness showed strong negative associations with resilience, whereas PTSD and recent suicidal ideation and behavior showed weaker negative associations. Building resilience is therefore an important intervention for the conditions reported among the youth in our study. This study contributes novel insights into the intersection of multiple psychological stressors and resilience, paving the way for more targeted, integrative mental health interventions.
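As an illustration of the hierarchical multiple regression reported above, the sketch below fits predictor blocks sequentially and compares R² across blocks; the file name, column names, and block composition are assumptions for illustration, not the study’s actual code.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("youth_mh.csv")  # hypothetical scored instrument data

# Hierarchical multiple regression: predictors entered in blocks; the
# change in R^2 reflects each block's added contribution to resilience.
block1 = smf.ols("resilience ~ age + gender", data=df).fit()
block2 = smf.ols("resilience ~ age + gender + depression + hopelessness",
                 data=df).fit()
block3 = smf.ols("resilience ~ age + gender + depression + hopelessness"
                 " + ptsd + loneliness + suicidality", data=df).fit()

for name, model in [("block 1", block1), ("block 2", block2), ("block 3", block3)]:
    print(f"{name}: R^2 = {model.rsquared:.3f}")
```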
Objective:
To describe the real-world clinical impact of a commercially available plasma cell-free DNA metagenomic next-generation sequencing assay, the Karius test (KT).
Methods:
We retrospectively evaluated the clinical impact of KT by clinical panel adjudication. Descriptive statistics were used to study associations of diagnostic indications, host characteristics, and KT-generated microbiologic patterns with the clinical impact of KT. Multivariable logistic regression modeling was used to further characterize predictors of higher positive clinical impact.
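A minimal sketch of how such a multivariable logistic regression could be set up in Python with statsmodels, reporting odds ratios with 95% confidence intervals as in the abstract; the simulated data frame and variable names are illustrative assumptions, not the study’s actual dataset or code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical case-level data: one row per Karius test (KT) case.
# 'positive_impact' = 1 if the adjudicated clinical impact was positive.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "positive_impact": rng.binomial(1, 0.16, 500),
    "culture_neg_endocarditis": rng.binomial(1, 0.10, 500),
    "fastidious_pathogen_concern": rng.binomial(1, 0.15, 500),
    "immunocompromised": rng.binomial(1, 0.50, 500),
})

X = sm.add_constant(df[["culture_neg_endocarditis",
                        "fastidious_pathogen_concern",
                        "immunocompromised"]])
fit = sm.Logit(df["positive_impact"], X).fit(disp=False)

# Odds ratios with 95% confidence intervals.
ci = fit.conf_int()
print(pd.DataFrame({"OR": np.exp(fit.params),
                    "CI_low": np.exp(ci[0]),
                    "CI_high": np.exp(ci[1]),
                    "p": fit.pvalues}).round(3))
```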
Results:
We evaluated 1000 unique clinical cases of KT from 941 patients between January 1, 2017, and August 31, 2023. The cohort included adult (70%) and pediatric (30%) patients. The overall clinical impact of KT was positive in 16%, negative in 2%, and absent in 82% of the cases. Among adult patients, multivariable logistic regression modeling showed that culture-negative endocarditis (OR 2.3; 95% CI, 1.11–4.53; P = .022) and concern for fastidious/zoonotic/vector-borne pathogens (OR 2.1; 95% CI, 1.11–3.76; P = .019) were associated with positive clinical impact of KT. Host immunocompromised status was not reliably associated with a positive clinical impact of KT (OR 1.03; 95% CI, 0.83–1.29; P = .7806). No significant predictors of KT clinical impact were found in pediatric patients. Microbiologic result pattern was also a significant predictor of impact.
Conclusions:
Our study highlights that despite the positive clinical impact of KT in select situations, most testing results had no clinical impact. We also confirm diagnostic indications where KT may have the highest yield, thereby generating tools for diagnostic stewardship.
Multispecies Justice (MSJ) is a theory and practice seeking to correct the defects that make dominant theories of justice incapable of responding to current and emerging planetary disruptions and extinctions. Multispecies Justice starts with the assumption that justice is not limited to humans but includes all Earth others, and the relationships that enable their functioning and flourishing. This Element describes and imagines a set of institutions, across all scales and in different spheres, that respect, revere, and care for the relationships that make life on Earth possible and allow all natural entities, humans included, to flourish. It draws attention to the prefigurative work happening within societies otherwise dominated by institutions characterised by Multispecies Injustice, demonstrating historical and ongoing practices of MSJ in different contexts. It then sketches speculative possibilities that expand on existing institutional reforms and are more fundamentally transformational. This title is also available as Open Access on Cambridge Core.
Archaeological sites in Northwest Africa are rich in human fossils and artefacts that provide proxies for behavioural and evolutionary studies. However, these records are difficult to anchor to a precise chronology, which can prevent robust assessment of the drivers of cultural/behavioural transitions. Past investigations have revealed that numerous volcanic ash (tephra) layers are interbedded within the Palaeolithic sequences and likely originate from large volcanic eruptions in the North Atlantic (e.g. the Azores, Canary Islands, Cape Verde). Critically, these ash layers offer a unique opportunity to provide new relative and absolute dating constraints (via tephrochronology) to synchronise key archaeological and palaeoenvironmental records in this region. Here, we provide an overview of the known eruptive histories of the potential source volcanoes capable of producing widespread ashfall in the region during the last ~300,000 years, and discuss the diagnostic glass compositions essential for robust tephra correlations. To investigate the eruption source parameters and weather patterns required for ash dispersal towards NW Africa, we simulate plausible ashfall distributions using the Ash3D model. This work constitutes the first step in developing a more robust tephrostratigraphic framework for distal ash layers in NW Africa and highlights how tephrochronology may be used to reliably synchronise and date key climatic and cultural transitions during the Palaeolithic.
The eastern Democratic Republic of Congo (DRC) has faced the dual burdens of poor mental health and heightened levels of violence against women and children in the home. Interventions addressing family violence prevention may offer a path to mitigating mental distress in the eastern DRC. This exploratory analysis uses data from a pilot cluster randomized controlled trial conducted in North Kivu, DRC, assessing the impact of Safe at Home, a violence prevention intervention. Mental health was assessed at endline using the Patient Health Questionnaire-4. Statistical analyses employed multilevel linear regression.
Assuming successful randomization, the impact of the Safe at Home intervention on mental health differed by participant gender. Women enrolled in the Safe at Home intervention reported statistically significant decreases in mental distress symptoms [β (95% CI) = −1.01 (−1.85, −0.17)], whereas men enrolled in Safe at Home had mental distress scores similar to the control group [β (95% CI) = −0.12 (−1.32, 1.06)].
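A minimal sketch of the multilevel linear regression described above, using a random intercept for the randomization cluster; the data are simulated and all names are illustrative assumptions (the study fit models of PHQ-4 mental distress scores separately by gender).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated endline data from a cluster-randomized trial.
rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({
    "phq4": rng.normal(6, 3, n),           # mental distress score
    "treatment": rng.integers(0, 2, n),    # 1 = Safe at Home arm
    "cluster": rng.integers(0, 20, n),     # randomization cluster
})

# Multilevel model: fixed effect of treatment, random intercept per cluster.
m = smf.mixedlm("phq4 ~ treatment", data=df, groups=df["cluster"]).fit()
print(m.summary())  # the 'treatment' beta and its 95% CI
```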
Ultimately, this exploratory analysis provides evidence of the potential for a family violence prevention model to improve women’s mental health in a low-resource, conflict-affected setting, although further research is needed to understand the impact on men’s mental health.
There are numerous challenges pertaining to epilepsy care across Ontario, including Epilepsy Monitoring Unit (EMU) bed pressures, surgical access and community supports. We surveyed the current clinical, community and operational state of Ontario epilepsy centres and community epilepsy agencies following the COVID-19 pandemic. A 44-item survey was distributed to all 11 district and regional adult and paediatric Ontario epilepsy centres, and qualitative responses were collected from community epilepsy agencies. Results revealed ongoing gaps in epilepsy care across Ontario, with EMU bed pressures and labour shortages being limiting factors. A clinical network advising the Ontario Ministry of Health will improve access to epilepsy care.
Despite infection control guidance, sporadic nosocomial coronavirus disease 2019 (COVID-19) outbreaks occur. We describe a complex severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) cluster with interfacility spread during the SARS-CoV-2 δ (delta) variant pandemic surge in the Midwest.
Setting:
This study was conducted in (1) a hematology-oncology ward in a regional academic medical center and (2) a geographically distant acute rehabilitation hospital.
Methods:
We conducted contact tracing for each COVID-19 case to identify healthcare exposures within the 14 days prior to diagnosis. Liberal testing for asymptomatic carriage was performed for patients and staff. Whole-genome sequencing was conducted on all available clinical isolates from patients and healthcare workers (HCWs) to identify transmission clusters.
Results:
In the immunosuppressed ward, 19 cases (4 patients, 15 HCWs) shared a genetically related SARS-CoV-2 isolate. Of these 4 patients, 3 died in the hospital or within 1 week of discharge. The suspected index case was a patient with new dyspnea, diagnosed during preprocedure screening. In the rehabilitation hospital, 20 cases (5 patients and 15 HCWs) tested positive for COVID-19, of whom 2 patients and 3 HCWs had an isolate genetically related to the above cluster. The suspected index case was a patient transferred from the immunosuppressed ward whose positive status was not detected at admission to the rehabilitation facility. Our response to this cluster included the following interventions in both settings: restricting visitors, learners, and overflow admissions; enforcing strict compliance with escalated PPE; providing free, frequent, on-site testing for staff; and testing all patients prior to hospital discharge and transfer to other facilities.
Conclusions:
Stringent infection control measures can prevent nosocomial COVID-19 transmission in healthcare facilities with high-risk patients during pandemic surges. These interventions were successful in ending these outbreaks.
Recent conceptualizations of concussion symptoms have begun to shift from a latent perspective (which suggests a common cause, i.e., head injury) to a network perspective (where symptoms influence and interact with each other throughout injury and recovery). Recent research has examined the network structure of the Post-Concussion Symptom Scale (PCSS) cross-sectionally at pre- and post-concussion, with the most important symptoms including dizziness, sadness, and feeling more emotional. However, within-subject comparisons between network structures at pre- and post-concussion have yet to be made. These analyses can provide invaluable information on whether concussion alters symptom interactions. This study examined within-athlete changes in PCSS network connectivity and centrality (the importance of different symptoms within the networks) from baseline to post-concussion.
Participants and Methods:
Participants were selected from a larger longitudinal database of high school athletes who completed the PCSS in English as part of their standard athletic training protocol (N=1,561). The PCSS is a 22-item self-report measure of common concussion symptoms (e.g., headache, vomiting, dizziness) on which individuals rate symptom severity using a 7-point Likert scale. Participants were excluded if they endorsed a history of brain surgery, neurodevelopmental disorder, or treatment history for epilepsy, migraines, psychiatric disorders, or alcohol/substance use. Network analysis was conducted on PCSS ratings from a baseline and an acute post-concussion (within 72 hours post-injury) assessment. In each network, nodes represented individual symptoms, and the edges connecting them represented their partial correlations. The regularized partial correlation networks were estimated using the Gaussian graphical model, with the GLASSO algorithm used for regularization. Each symptom’s expected influence (the sum of its partial correlations with the other symptoms) was calculated to identify the most central symptoms in each network. Recommended techniques from Epskamp et al. (2018) were used to assess the accuracy of the estimated symptom importance and relationships. Network Comparison Tests were conducted to examine changes in network connectivity, structure, and node influence.
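The networks above were estimated with a GLASSO-regularized Gaussian graphical model (in network psychometrics this is usually tuned with EBIC); the rough Python approximation below uses scikit-learn’s cross-validated graphical lasso instead, with simulated ratings standing in for the PCSS data, and then computes each node’s expected influence.

```python
import numpy as np
import pandas as pd
from sklearn.covariance import GraphicalLassoCV

# Simulated stand-in for PCSS data: rows = athletes, columns = 22 symptoms.
rng = np.random.default_rng(2)
pcss = pd.DataFrame(rng.integers(0, 7, size=(500, 22)),
                    columns=[f"symptom_{i}" for i in range(22)])

# Regularized Gaussian graphical model (graphical lasso with CV tuning).
gl = GraphicalLassoCV().fit(pcss.values)

# Convert the precision matrix to a partial-correlation network.
P = gl.precision_
d = np.sqrt(np.diag(P))
partial_corr = -P / np.outer(d, d)
np.fill_diagonal(partial_corr, 0.0)

# Expected influence per node: the sum of its signed edge weights.
expected_influence = pd.Series(partial_corr.sum(axis=0), index=pcss.columns)
print(expected_influence.sort_values(ascending=False).head())
```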
Results:
Both the baseline and acute post-concussion networks contained negative and positive relationships. The expected influence of symptoms was stable in both networks, with difficulty concentrating having the greatest expected influence in each. The strongest edges in the networks were between symptoms within similar domains of functioning (e.g., sleeping less was associated with trouble falling asleep). Network connectivity did not differ significantly between networks (S=0.43), suggesting the overall degree to which symptoms are related was unchanged at acute post-concussion. Network structure differed significantly at acute post-concussion (M=0.305), suggesting specific relationships in the acute post-concussion network differed from those at baseline. In the acute post-concussion network, vomiting was less central, while sensitivity to noise and feeling mentally foggy were more central.
Conclusions:
PCSS network structure is altered at acute post-concussion, suggesting concussion may disrupt symptom networks and change how certain symptoms are associated with the experience of others. Future research should compare PCSS networks later in recovery to examine whether these structural changes persist or return to baseline, as observing post-concussion changes in PCSS network structure could inform symptom resolution trajectories.
Previous studies have found differences between monolingual and bilingual athletes on ImPACT, the most widely used sport-related concussion (SRC) assessment measure. Most recently, results suggest that monolingual English-speaking athletes outperformed bilingual English- and Spanish-speaking athletes on the Visual Motor Speed and Reaction Time composites. Before these differences can be investigated further, measurement invariance of ImPACT must be established to ensure that they are not attributable to measurement error. The current study aimed to 1) replicate a recently identified four-factor model using cognitive subtest scores of ImPACT on baseline assessments in monolingual English-speaking athletes and bilingual English- and Spanish-speaking athletes and 2) establish measurement invariance across groups.
Participants and Methods:
Participants included high school athletes who were administered the ImPACT in English as part of their standard pre-season athletic training protocol. Participants were excluded if they had a self-reported history of concussion, autism, ADHD, or learning disability, or a treatment history of epilepsy/seizures, brain surgery, meningitis, psychiatric disorders, or substance/alcohol use. The final sample included 7,948 monolingual English-speaking athletes and 7,938 bilingual English- and Spanish-speaking athletes with valid baseline assessments. Language variables were based on self-report. As the number of monolingual athletes was substantially larger than the number of bilingual athletes, monolingual athletes were randomly selected from the larger sample to match the bilingual athletes on age, sex, and sport. Confirmatory factor analysis (CFA) was used to test competing one-, two-, and three-factor models against a recently identified four-factor model (Visual Memory, Visual Reaction Time, Verbal Memory, Working Memory) to determine which provided the best fit to the data. Eighteen subtest scores from ImPACT were used in the CFAs. Configural, metric, scalar, and residual levels of invariance were then assessed by language group through increasingly restrictive multigroup CFAs (MGCFA).
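A minimal single-group CFA sketch in Python using the semopy package; indicator names are placeholders (the 18 actual ImPACT subtest labels are not listed here), and the multigroup invariance sequence (configural, metric, scalar, residual) is typically run with dedicated SEM tooling such as R’s lavaan rather than shown below.

```python
import pandas as pd
import semopy

# Illustrative four-factor ImPACT model; indicator names are placeholders.
model_desc = """
VerbalMemory   =~ vm1 + vm2 + vm3 + vm4
VisualMemory   =~ vis1 + vis2 + vis3 + vis4 + vis5
VisualReaction =~ rt1 + rt2 + rt3 + rt4 + rt5
WorkingMemory  =~ wm1 + wm2 + wm3 + wm4
"""

df = pd.read_csv("impact_baseline.csv")  # hypothetical subtest scores

cfa = semopy.Model(model_desc)
cfa.fit(df)
print(semopy.calc_stats(cfa).T)  # CFI, TLI, RMSEA, and other fit indices
```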
Results:
CFA indicated that the four-factor model provided the best fit in both the monolingual and bilingual samples compared with competing models. However, some goodness-of-fit statistics were below recommended cutoffs, so post-hoc model modifications were made on a theoretical basis and by examination of modification indices. The modified four-factor model had adequate to superior fit, met criteria for all goodness-of-fit indices, and was retained as the configural model for testing measurement invariance across language groups. MGCFA revealed that residual invariance, the strictest level of invariance, was achieved across groups.
Conclusions:
This study supports a modified four-factor model as representing the latent structure of ImPACT cognitive scores in monolingual English-speaking and bilingual English- and Spanish-speaking high school athletes at baseline assessment. The results further suggest that the differences between monolingual English-speaking and bilingual English- and Spanish-speaking athletes reported in prior ImPACT studies are not caused by measurement error. The reason for these differences remains unclear, but they are consistent with other studies suggesting monolingual advantages. Given the increasing number of bilingual individuals in the United States and in high school athletics, future research should investigate other sources of error, such as item bias and predictive validity, to further understand whether group differences reflect real differences between these athletes.
Hippocampal pathology is a consistent feature in persons with temporal lobe epilepsy (TLE) and a strong biomarker of memory impairment. Histopathological studies have identified selective patterns of cell loss across hippocampal subfields in TLE, the most common being cellular loss in the cornu ammonis 1 (CA1) and dentate gyrus (DG). Structural neuroimaging provides a non-invasive method to understand hippocampal pathology, but traditionally only at the whole-hippocampus level. Recent methodological advances, however, have enabled non-invasive quantification of subfield pathology in patients, enabling potential integration into the clinical workflow. In this study, we characterize patterns of hippocampal subfield atrophy in patients with TLE and examine the associations between subfield atrophy and clinical characteristics.
Participants and Methods:
High-resolution T2- and T1-weighted MRI were collected from 31 participants (14 left TLE; 6 right TLE; 11 healthy controls [HC]; aged 18–61 years). Reconstructions of hippocampal subfields and estimates of their volumes were derived using the Automated Segmentation of Hippocampal Subfields (ASHS) pipeline. Total hippocampal volume was calculated by combining estimates of the subfields CA1-3, DG, and subiculum. To control for variations in head size, all volume estimates were divided by estimates of total brain volume. To assess disease effects on hippocampal atrophy, hippocampi were recoded as either ipsilateral or contralateral to the side of seizure focus. Two-sample t-tests at the whole-hippocampus level were used to test for ipsilateral and contralateral volume loss in patients relative to HC. To assess whether we replicated the selective histopathological patterns of subfield atrophy, we carried out a mixed-effects ANOVA coding for an interaction between diagnostic group and hippocampal subfield. Finally, to assess effects of disease load, non-parametric correlations were computed between subfield volume and both age at first seizure and duration of illness.
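A hedged sketch of the volume normalization and group comparison steps in Python; the file and column names are assumptions, and the mixed-effects ANOVA would ordinarily be fit with dedicated software, so only the t-test, effect size, and non-parametric correlation steps are shown.

```python
import numpy as np
import pandas as pd
from scipy import stats

# Hypothetical table: group, total brain volume (tbv), subfield volumes,
# and duration of illness, with hippocampi coded ipsilateral to seizure focus.
df = pd.read_csv("ashs_volumes.csv")

# Normalize each subfield volume by total brain volume.
subfields = ["CA1", "CA2", "CA3", "DG", "SUB"]
for sf in subfields:
    df[sf + "_norm"] = df[sf] / df["tbv"]
df["hipp_norm"] = df[[sf + "_norm" for sf in subfields]].sum(axis=1)

# Two-sample t-test (TLE vs HC) with Cohen's d from the pooled SD.
tle = df.loc[df["group"] == "TLE", "hipp_norm"]
hc = df.loc[df["group"] == "HC", "hipp_norm"]
t, p = stats.ttest_ind(tle, hc)
pooled_sd = np.sqrt(((len(tle) - 1) * tle.var() + (len(hc) - 1) * hc.var())
                    / (len(tle) + len(hc) - 2))
d = (tle.mean() - hc.mean()) / pooled_sd
print(f"t = {t:.2f}, p = {p:.4f}, d = {d:.2f}")

# Non-parametric (Spearman) correlation with duration of illness.
rho, p_rho = stats.spearmanr(df.loc[df["group"] == "TLE", "duration"],
                             df.loc[df["group"] == "TLE", "CA2_norm"])
print(f"rho = {rho:.3f}, p = {p_rho:.4f}")
```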
Results:
Patients had significantly smaller total ipsilateral hippocampal volume compared with HC (d=1.23, p<.005). Contralateral hippocampal volume did not significantly differ between TLE and HC. Examining individual subfields in the ipsilateral hemisphere revealed significant main effects of group (F(1, 29)=8.2, p<0.01) and subfield (F(4, 115)=550.5, p<0.005), and a significant interaction (F(4, 115)=8.1, p<0.001). Post-hoc tests revealed that TLE patients had significantly smaller volumes in the ipsilateral CA1 (d=-2.0, p<0.001) and DG (d=-1.4, p<0.005). Longer duration of illness was associated with smaller volume of the ipsilateral CA2 (ρ=-0.492, p<0.05) and larger volumes of the contralateral whole hippocampus (ρ=0.689, p<0.001), CA1 (ρ=0.614, p<0.005), and DG (ρ=0.450, p<0.05).
Conclusions:
Histopathological characterization after surgery has revealed important associations between hippocampal subfield cell loss and memory impairments in patients with TLE. Here we demonstrate that non-invasive neuroimaging can detect a pattern of subfield atrophy in TLE (i.e., CA1/DG) that matches the most common form of histopathologically observed hippocampal sclerosis in TLE (HS Type 1) and has been linked directly to both verbal and visuospatial memory impairment. Finally, we found evidence that longer disease duration is associated with larger contralateral hippocampal volume, driven by increases in CA1 and DG. This may reflect subfield-specific functional reorganization to the unaffected brain tissue, a compensatory effect which may have important implications for patient function and successful treatment outcomes.
The prevalence and patterns of autism spectrum disorder (ASD) symptoms/traits, and the associations of ASD with psychiatric and substance use disorders, have not been documented in non-clinical students in Sub-Saharan Africa, and in Kenya in particular.
Aims
To document the risk level of ASD and its traits in a Kenyan student population (high school, college and university) using the Autism-Spectrum Quotient (AQ); and to determine the associations between ASD and other psychiatric and substance use disorders.
Method
This was a cross-sectional study among students (n = 9626). We used instruments with sufficient psychometric properties and good discriminative validity to collect data. A cut-off score of ≥32 on the AQ was used to identify those at high risk of ASD. We conducted the following statistical tests: (a) basic descriptive statistics; (b) chi-squared tests and Fisher's exact tests to analyse associations between categorical variables and ASD; (c) independent t-tests to examine two-group comparisons with ASD; (d) one-way analysis of variance to make comparisons between categorical variables with three or more groups and ASD; (e) statistically significant (P < 0.05) variables fitted into an ordinal logistic regression model to identify determinants of ASD; (f) Pearson's correlation and reliability analysis.
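To make steps (b) and (e) concrete, the sketch below shows a chi-squared test of association and an ordinal logistic regression on the ordered ASD risk category using statsmodels; the file and column names are illustrative assumptions, not the study’s variables.

```python
import pandas as pd
from scipy import stats
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.read_csv("aq_survey.csv")  # hypothetical student-level data

# (b) Chi-squared test: categorical variable vs. ASD risk category.
table = pd.crosstab(df["gender"], df["asd_risk"])
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.4f}")

# (e) Ordinal logistic regression on the ordered risk category, fitted
# with variables that were significant in the bivariate analyses.
df["asd_risk"] = pd.Categorical(df["asd_risk"],
                                categories=["low", "medium", "high"],
                                ordered=True)
mod = OrderedModel(df["asd_risk"],
                   df[["age", "alcohol_use", "depression_score"]],
                   distr="logit")
print(mod.fit(method="bfgs", disp=False).summary())
```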
Results
Of the total sample, 54 (0.56%) were at high risk of ASD. Sociodemographic differences were found in the mean scores for the various traits, and statistically significant (P < 0.05) associations were found between ASD and various psychiatric and substance use disorders.
Conclusions
Risk of ASD, gender characteristics and associations with psychiatric and substance use disorders in this Kenyan sample are similar to those found in non-clinical populations in Western settings.
In recent years, the Outaouais region (Quebec, Canada) and its residents have endured no fewer than five natural disasters (floods, tornadoes). These disasters are likely to have a variety of consequences for the physical and mental health of adolescents, as well as for their personal, family, school and social lives. The experiences of teenagers are also likely to vary depending on whether they live in rural or urban areas.
Method:
Data were collected via a self-administered online questionnaire in February 2022; a total of 1307 teenagers from two high schools participated in the study. The questionnaire used validated tests to measure various aspects of the youths’ mental health, including manifestations of post-traumatic stress, anxiety and depression, as well as the presence of suicidal thoughts and self-harm. Other aspects of the youths’ experience were also measured, including their level of social support, school engagement, alcohol and drug use, and coping strategies.
Results:
One-third of the students (n=1307) were experiencing depressive symptoms and suicidal thoughts, as well as significant daily stress. More than 25% of the students had moderate or severe anxiety and thoughts of self-harm. These problems were significantly more prevalent among youths with prior exposure to a natural disaster. The study data also revealed that youths living in rural areas had a more worrying profile than those living in urban areas.
Conclusion:
Similar to other studies (Ran et al., 2015; Stratta et al., 2014), our research data revealed that youths living in rural areas presented a more concerning profile than those residing in urban areas. It therefore seems important, in future studies and services, to focus more specifically on these teenagers to better understand their needs and to develop adapted services more likely to meet those needs.
Atypical reward processing has been proposed to underpin the atypical social features of autism spectrum disorder (ASD). However, previous neuroimaging studies have yielded inconsistent results regarding the specificity of atypicalities for social reward processing in ASD.
Aims
Utilising a large sample, we aimed to assess reward processing in response to reward type (social, monetary) and reward phase (anticipation, delivery) in ASD.
Method
Functional magnetic resonance imaging during social and monetary reward anticipation and delivery was performed in 212 individuals with ASD (7.6–30.6 years of age) and 181 typically developing participants (7.6–30.8 years of age).
Results
Across social and monetary reward anticipation, whole-brain analyses showed hypoactivation of the right ventral striatum in participants with ASD compared with typically developing participants. Further, region of interest analysis across both reward types yielded ASD-related hypoactivation in both the left and right ventral striatum. Across delivery of social and monetary reward, hyperactivation of the ventral striatum in individuals with ASD did not survive correction for multiple comparisons. Dimensional analyses of autism and attention-deficit hyperactivity disorder (ADHD) scores were not significant. In categorical analyses, post hoc comparisons showed that ASD effects were most pronounced in participants with ASD without co-occurring ADHD.
Conclusions
Our results do not support current theories linking atypical social interaction in ASD to specific alterations in social reward processing. Instead, they point towards a generalised hypoactivity of ventral striatum in ASD during anticipation of both social and monetary rewards. We suggest this indicates attenuated reward seeking in ASD independent of social content and that elevated ADHD symptoms may attenuate altered reward seeking in ASD.
Abnormal tau, a hallmark Alzheimer’s disease (AD) pathology, may appear in the locus coeruleus (LC) decades before AD symptom onset. Reports of subjective cognitive decline are also often present prior to formal diagnosis. Yet, the relationship between LC structural integrity and subjective cognitive decline has remained unexplored. Here, we aimed to explore these potential associations.
Methods:
We examined 381 community-dwelling men (mean age = 67.58; SD = 2.62) in the Vietnam Era Twin Study of Aging who underwent LC-sensitive magnetic resonance imaging and, along with their selected informants, completed the Everyday Cognition scale to measure subjective cognitive decline. Mixed models examined the associations between rostral-middle and caudal LC integrity and subjective cognitive decline after adjusting for depressive symptoms, physical morbidities, and family clustering. Models also adjusted for current objective cognitive performance and objective cognitive decline to explore attenuation.
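Two pieces of this pipeline lend themselves to a short sketch: the LC contrast-to-noise ratio (commonly defined against a pontine reference region, though mask and peak definitions vary by study) and a mixed model with a random intercept per family to handle twin clustering. All data and column names below are simulated assumptions, not the study’s materials.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def lc_cnr(lc_signal: np.ndarray, ref_signal: np.ndarray) -> float:
    """LC contrast-to-noise ratio as commonly used for LC-sensitive MRI:
    (peak LC intensity - mean reference intensity) / mean reference intensity."""
    ref_mean = ref_signal.mean()
    return (lc_signal.max() - ref_mean) / ref_mean

rng = np.random.default_rng(3)
cnr = lc_cnr(rng.normal(110, 5, 30), rng.normal(100, 5, 200))  # example usage

# Simulated participant-level data with two twins per family.
df = pd.DataFrame({
    "ecog_memory": rng.normal(1.5, 0.4, 300),    # subjective decline rating
    "lccnr_rostral": rng.normal(0.12, 0.03, 300),
    "depression": rng.normal(5, 3, 300),
    "family_id": np.repeat(np.arange(150), 2),
})

# Mixed model: random intercept per family, mirroring the study's
# adjustment for familial clustering (other covariates abbreviated).
m = smf.mixedlm("ecog_memory ~ lccnr_rostral + depression",
                data=df, groups=df["family_id"]).fit()
print(m.summary())
```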
Results:
For participant ratings, lower rostral-middle LC contrast to noise ratio (LCCNR) was associated with significantly greater subjective decline in memory, executive function, and visuospatial abilities. For informant ratings, lower rostral-middle LCCNR was associated with significantly greater subjective decline in memory only. Associations remained after adjusting for current objective cognition and objective cognitive decline in respective domains.
Conclusions:
Lower rostral-middle LC integrity is associated with greater subjective cognitive decline. Although not explained by objective cognitive performance, this relationship may help explain the increased AD risk in people with subjective cognitive decline, as the LC is a neural substrate important for higher-order cognitive processing, attention, and arousal, and one of the first sites of AD pathology.
High-quality evidence from prospective longitudinal studies in humans is essential to testing hypotheses related to the developmental origins of health and disease. In this paper, the authors draw upon their own experiences leading birth cohorts with longitudinal follow-up into adulthood to describe specific challenges and lessons learned. Challenges are substantial and grow over time. Long-term funding is essential for study operations and critical to retaining study staff, who develop relationships with participants and hold important institutional knowledge and technical skill sets. To maintain contact, we recommend that cohorts apply multiple strategies for tracking and obtain as much high-quality contact information as possible before the child’s 18th birthday. To maximize engagement, we suggest that cohorts offer flexibility in visit timing, length, location, frequency, and type. Data collection may entail multiple modalities, even at a single collection timepoint, including measures that are self-reported, research-measured, and administrative, with a mix of remote and in-person collection. Many topics highly relevant for adolescent and young adult health and well-being are considered private in nature, and their assessment requires sensitivity. To motivate ongoing participation, cohorts must work to understand participant barriers and motivators, share scientific findings, and provide appropriate compensation for participation. It is essential for cohorts to strive for broad representation, including individuals from higher-risk populations, not only among participants but also among staff. Successful longitudinal follow-up of a study population ultimately requires flexibility, adaptability, appropriate incentives, and opportunities for feedback from participants.
In the immediate aftermath of a disaster, household members may experience a lack of support services and isolation from one another. To address this, a common recommendation is to promote preparedness through the preparation of an emergency supply kit (ESK). Our goal was to characterize ESK possession at a national level to help the Centers for Disease Control and Prevention (CDC) guide next steps to better prepare for and respond to disasters and emergencies at the community level.
Methods:
The authors analyzed data collected through Porter Novelli’s ConsumerStyles surveys in fall 2020 (n = 3625) and spring 2021 (n = 6455).
Results:
ESK ownership is lacking: while most respondents believed that an ESK would improve their chance of survival, only a third had one. Age, gender, education level, and region of the country were significant predictors of kit ownership in a multivariate model. In addition, there was a significant association between level of preparedness and ESK ownership.
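A minimal sketch of the association test between preparedness level and ESK ownership, using a chi-squared test on a contingency table; the file and column names are assumptions for illustration.

```python
import pandas as pd
from scipy.stats import chi2_contingency

df = pd.read_csv("consumerstyles.csv")  # hypothetical survey extract

table = pd.crosstab(df["preparedness_level"], df["owns_esk"])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.4f}")
```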
Conclusions:
These data are an essential starting point in characterizing ESK ownership and can be used to help tailor public messaging, inform work with partners to increase ESK ownership, and guide future research.