Approximately 95% of patients with a beta-lactam allergy noted in their medical record are not truly allergic when tested. These patients may unnecessarily avoid first-line antibiotics, resulting in increased treatment failure, higher costs, and antibiotic resistance. Bone marrow transplant (BMT) patients may be at higher risk for these adverse outcomes due to weakened immune systems and high risk for severe infections. Our objective was to evaluate beta-lactam allergy labels and their influence on BMT patient outcomes.
Methods:
We conducted a retrospective cohort study of adult inpatients undergoing BMT during April 2018-March 2020. Eligibility for penicillin allergy testing/de-labeling was evaluated. Multivariable logistic regression was performed to measure independent effects of beta-lactam allergy labels on 100-day outcomes: mortality, ICU admission, rehospitalization, and intravenous antibiotic use.
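As a hedged illustration of the multivariable model described above, the sketch below fits a logistic regression for one 100-day outcome (carbapenem receipt) against the allergy label; the file name, column names, and covariate set are assumptions for the example, not the study's actual variables.

```python
# Hypothetical sketch only: adjusted association between a beta-lactam allergy
# label and a 100-day outcome, using statsmodels. Column names are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("bmt_cohort.csv")  # assumed: one row per BMT patient

model = smf.logit(
    "carbapenem_100d ~ allergy_label + age + C(transplant_type)",
    data=df,
).fit()

# Odds ratios with 95% confidence intervals
or_table = np.exp(model.conf_int())
or_table.columns = ["2.5%", "97.5%"]
or_table["OR"] = np.exp(model.params)
print(or_table)
```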
Results:
Among 358 BMT patients, 75 (21%) had a beta-lactam allergy label at baseline. Mortality was higher in patients with an allergy label (14.7% vs 7.8%, P = 0.067). In multivariable analysis, patients with allergy labels were not at a significantly greater risk of mortality (OR = 1.60; 95% CI = 0.68–3.78) but were significantly more likely to receive carbapenems (OR = 6.27; 95% CI = 2.81–13.98). All patients with penicillin-class allergy labels were eligible for allergy testing/de-labeling.
Conclusion:
We did not observe a significant increased risk of mortality in BMT patients with beta-lactam allergy labels; however, increased carbapenem use was observed. Penicillin allergy de-labeling programs may help optimize antibiotic prescribing in BMT patients. Larger studies are needed to quantify the impact of beta-lactam allergy labels on BMT patient outcomes.
Yellow nutsedge (Cyperus esculentus L.) is one of the most problematic weeds in turfgrass due to its fast growth rate and high tuber production. Effective long-term control relies on translocation of systemic herbicides to underground tubers. Two identical trials were conducted simultaneously in separate greenhouses to evaluate the effect of several acetolactate synthase (ALS)- and protoporphyrinogen oxidase (PPO)-inhibiting postemergence herbicides on C. esculentus tuber production and viability. Seven tubers were planted into 1-L pots, and plants were allowed to mature for 6 wk before trial initiation. Treatments included pyrimisulfan at 73 g ai ha−1 once or 49 g ai ha−1 twice, imazosulfuron at 736 g ai ha−1 once or 420 g ai ha−1 twice, carfentrazone-ethyl + sulfentrazone at 22 + 198 g ai ha−1 once or 14 + 127 g ai ha−1 twice, halosulfuron at 70 g ai ha−1 once or 35 g ai ha−1 twice, and a nontreated control. Sequential applications were made 3 wk after initial treatment (WAIT) for both trials. Both single and sequential applications of carfentrazone-ethyl + sulfentrazone exhibited the quickest control (80% to 83% at 4 WAIT). Two applications of imazosulfuron resulted in the greatest reduction in tuber number (81%) and tuber dry biomass (85%), while one application of carfentrazone-ethyl + sulfentrazone resulted in the greatest reduction in shoot biomass (71%). The viability of tubers that were recovered from each pot was reduced by 48% to 70%, with the greatest reduction in response to carfentrazone-ethyl + sulfentrazone. Although two applications of pyrimisulfan only resulted in tuber number and shoot biomass reductions of 66% and 38%, respectively, tuber dry biomass reduction was 80%. Therefore, pyrimisulfan, imazosulfuron, halosulfuron, and carfentrazone-ethyl + sulfentrazone are all viable options for long-term C. esculentus control in turfgrass.
Black communities in the United States experience disproportionate rates of adverse health outcomes. In this chapter, we discuss the importance of culturally sensitive, empowerment-focused health promotion programs in Black communities anchored in the community-based participatory research (CBPR) approach and/or the patient-centered culturally sensitive health care (PC-CSHC) model. One program is the Health-Smart Holistic Health Program for Black Seniors, which is an ongoing, multiyear program designed to promote physical activity and healthy eating and reduce social isolation, food insecurity, and financial insecurity among older Black adults in low-income communities. The second program is the Health-Smart for Weight Loss Program, which is a cluster randomized controlled trial targeting Black women with obesity that tested (a) the impact of an evidence-based, patient-empowerment-focused weight loss program and (b) the comparative effectiveness of a patient-centered culturally sensitive weight loss maintenance program versus a standard behavioral weight loss maintenance program. The results support the use of patient-centered, culturally sensitive, and community-based participatory approaches to improve health outcomes in Black communities.
Diagnostic criteria for major depressive disorder allow for heterogeneous symptom profiles but genetic analysis of major depressive symptoms has the potential to identify clinical and etiological subtypes. There are several challenges to integrating symptom data from genetically informative cohorts, such as sample size differences between clinical and community cohorts and various patterns of missing data.
Methods
We conducted genome-wide association studies of major depressive symptoms in three cohorts that were enriched for participants with a diagnosis of depression (Psychiatric Genomics Consortium, Australian Genetics of Depression Study, Generation Scotland) and three community cohorts that were not recruited on the basis of diagnosis (Avon Longitudinal Study of Parents and Children, Estonian Biobank, and UK Biobank). We fit a series of confirmatory factor models with factors that accounted for how symptom data were sampled and then compared alternative models with different symptom factors.
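A rough individual-level analogue of this model comparison is sketched below with semopy: a one-factor symptom model versus a model with a separate Appetite/Weight factor, compared on standard fit indices. The symptom column names and input file are invented for illustration, and the sketch omits the genetic covariance structure and skip-structure measurement factors used in the actual analysis.

```python
# Hedged, individual-level analogue of the described model comparison using semopy.
# Symptom column names and the input file are illustrative assumptions.
import pandas as pd
from semopy import Model, calc_stats

symptoms = pd.read_csv("symptom_items.csv")  # assumed: one column per symptom

one_factor = """
MDD =~ mood + anhedonia + sleep + fatigue + appetite + weight + guilt + concentration
"""

two_factor = """
MDD   =~ mood + anhedonia + sleep + fatigue + guilt + concentration
AppWt =~ appetite + weight
MDD ~~ AppWt
"""

for name, desc in [("one-factor", one_factor), ("appetite/weight factor", two_factor)]:
    m = Model(desc)
    m.fit(symptoms)
    print(name)
    print(calc_stats(m).T)  # fit indices (e.g., CFI, RMSEA, AIC/BIC) for comparison
```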
Results
The best fitting model had a distinct factor for Appetite/Weight symptoms and an additional measurement factor that accounted for the skip-structure in community cohorts (use of Depression and Anhedonia as gating symptoms).
Conclusion
The results show the importance of assessing the directionality of symptoms (such as hypersomnia versus insomnia) and of accounting for study and measurement design when meta-analyzing genetic association data.
A healthcare-associated group A Streptococcus outbreak involving six patients, four healthcare workers, and one household contact occurred in the labor and delivery unit of an academic medical center. Isolates were highly related by whole genome sequencing. Infection prevention measures, healthcare worker screening, and chemoprophylaxis of those colonized halted further transmission.
Depression is the largest global contributor to non-fatal disease burden(1). A growing body of evidence suggests that dietary behaviours, such as higher fruit and vegetable intake, may be protective against the risk of depression(2). However, this evidence is primarily from high-income countries, despite over 80% of the burden of depression being experienced in low- and middle-income countries(1). There are also limited studies to date focusing on older adults. The aim of this study was to prospectively examine the associations between baseline fruit and vegetable intake and incidence of depression in adults aged 45 years and older from 10 cohorts across six continents, including four cohorts from low- and middle-income countries. The association between baseline fruit and vegetable intake and incident depression over a 3–6-year follow-up period was examined using Cox proportional hazards regression after controlling for a range of potential confounders. Participants were 7771 community-based adults aged 45+ years from 10 diverse cohorts. All cohorts were members of the Cohort Studies of Memory in an International Consortium collaboration(3). Fruit intake (excluding juice) and vegetable intake were collected using either a comprehensive food frequency questionnaire, short food questionnaire or diet history. Depressive symptoms were assessed using validated depression measures, and depression was defined as a score greater than or equal to a validated cut-off. Prior to analysis, all data were harmonised. Analysis was performed by cohort and then cohort results were combined using meta-analysis. Subgroup analysis was performed by sex, age (45–64 versus 65+ years) and income level of country (high-income countries versus low- and middle-income countries). There were 1537 incident cases of depression over 32,420 person-years of follow-up. Mean daily intakes of fruit were 1.7 ± 1.5 serves and vegetables 1.9 ± 1.4 serves. We found no association between fruit and vegetable intakes and risk of incident depression in any of the analyses, and this was consistent across the subgroup analyses. The low fruit and vegetable intake of participants, diverse measures used across the different cohorts, and modest sample size of our study compared with prior studies in the literature, may have prevented an association being detected. Further investigation using standardised measures in larger cohorts of older adults from low- and middle-income countries is needed. Future research should consider the potential relationship between different types of fruits and vegetables and depression.
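As a hedged sketch of the per-cohort survival analysis and pooling step described above, the example below fits a Cox model in each cohort with lifelines and then pools the log hazard ratios by simple inverse-variance (fixed-effect) weighting; the variable names, covariates, and file names are placeholders, not the consortium's harmonised variables.

```python
# Illustrative sketch: Cox model within each cohort, then inverse-variance
# pooling of log hazard ratios. Column names and covariates are assumptions.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

def cohort_log_hr(frame):
    cph = CoxPHFitter()
    cph.fit(
        frame[["followup_years", "incident_depression", "fv_serves_per_day", "age", "sex"]],
        duration_col="followup_years",
        event_col="incident_depression",
    )
    return cph.params_["fv_serves_per_day"], cph.standard_errors_["fv_serves_per_day"]

# Assumed list of harmonised per-cohort files
cohort_frames = [pd.read_csv(path) for path in ["cohort_a.csv", "cohort_b.csv"]]
results = [cohort_log_hr(frame) for frame in cohort_frames]
betas = np.array([b for b, _ in results])
ses = np.array([s for _, s in results])
weights = 1.0 / ses**2
pooled = np.sum(weights * betas) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))
print("Pooled HR per daily serve:", np.exp(pooled))
print("95% CI:", np.exp(pooled - 1.96 * pooled_se), np.exp(pooled + 1.96 * pooled_se))
```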
The brain can be represented as a network, with nodes as brain regions and edges as region-to-region connections. Nodes with the most connections (hubs) are central to efficient brain function. Current findings on structural differences in Major Depressive Disorder (MDD) identified using network approaches remain inconsistent, potentially due to small sample sizes. It is still uncertain at what level of the connectome hierarchy differences may exist, and whether they are concentrated in hubs, disrupting fundamental brain connectivity.
Methods
We utilized two large cohorts, UK Biobank (UKB, N = 5104) and Generation Scotland (GS, N = 725), to investigate MDD case–control differences in brain network properties. Network analysis was done across four hierarchical levels: (1) global, (2) tier (nodes grouped into four tiers based on degree) and rich club (between-hub connections), (3) nodal, and (4) connection.
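The sketch below illustrates, with networkx, the kinds of graph measures named at these levels (global efficiency, degree-based hub tiers, rich-club coefficients, local efficiency); the thresholding rule and input matrix are assumptions for the example rather than the study's processing pipeline.

```python
# Minimal sketch of the network measures named above, computed on a thresholded
# structural connectivity matrix. Threshold and input file are illustrative only.
import numpy as np
import networkx as nx

conn = np.load("connectivity_matrix.npy")            # assumed symmetric region-by-region matrix
adj = (conn > np.percentile(conn, 90)).astype(int)   # keep the strongest 10% of connections
np.fill_diagonal(adj, 0)
G = nx.from_numpy_array(adj)

# Level 1: global efficiency
print("Global efficiency:", nx.global_efficiency(G))

# Level 2: degree-based hub tier and rich-club organization
degrees = dict(G.degree())
hub_cutoff = np.percentile(list(degrees.values()), 75)
hubs = [node for node, deg in degrees.items() if deg >= hub_cutoff]
print("Number of hub nodes:", len(hubs))
print("Rich-club coefficients:", nx.rich_club_coefficient(G, normalized=False))

# Level 3: nodal-level summary (local efficiency)
print("Local efficiency:", nx.local_efficiency(G))
```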
Results
In UKB, reductions in network efficiency were observed in MDD cases globally (d = −0.076, pFDR = 0.033), across all tiers (d = −0.069 to −0.079, pFDR = 0.020), and in hubs (d = −0.080 to −0.113, pFDR = 0.013–0.035). No differences in rich club organization and region-to-region connections were identified. The effect sizes and direction for these associations were generally consistent in GS, albeit not significant in our lower-N replication sample.
Conclusion
Our results suggest that the brain’s fundamental rich club structure is similar in MDD cases and controls, but subtle topological differences exist across the brain. Consistent with recent large-scale neuroimaging findings, this work offers a connectomic perspective on a similar scale and supports the idea that minimal differences exist between MDD cases and controls.
Pediatric cancer and cancer-related treatments may disrupt brain development and place survivors at risk for long-term problems with cognitive functions. Processing efficiency has been operationalized as a nuanced cognitive skill that reflects both processing speed (PS) and working memory (WM) abilities and is sensitive to neurobiological disruption. Pediatric cancer survivors are at risk for processing efficiency deficits; however, a thorough characterization of processing efficiency skills across pediatric primary central nervous system (CNS) tumor and non-CNS cancer survivors has not yet been reported.
Participants and Methods:
Participants were selected from a mixed retrospective clinical database of pediatric cancer survivors (Total n=160; primary CNS tumor n=33; Non-CNS n=127). Univariate analyses were conducted to examine differences in processing efficiency mean scores (t-tests) and percent impairment (scores >1 SD below mean; chi-squared tests) between the total sample and normative sample, and across groups (CNS vs. Non-CNS). Multiple linear regressions were utilized to evaluate the relationships between additional risk factors, including biological sex, age at diagnosis, time since treatment, and socioeconomic status, and processing efficiency outcomes.
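A hedged sketch of these comparisons is shown below: a one-sample t-test against the normative mean, a chi-squared test of impairment rates by group, and a multiple linear regression with the listed risk factors. All column names and the input file are illustrative, not the clinical database's actual fields.

```python
# Sketch of the univariate comparisons and regression described above.
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

df = pd.read_csv("survivor_scores.csv")  # assumed: one row per survivor

# One-sample t-test against the normative mean of 100
t, p = stats.ttest_1samp(df["processing_speed"], popmean=100)
print(f"PS vs normative mean: t={t:.2f}, p={p:.4f}")

# Chi-squared test of impairment rates (scores >1 SD below the normative mean) by group
df["ps_impaired"] = df["processing_speed"] < 85
table = pd.crosstab(df["group"], df["ps_impaired"])   # group: CNS vs non-CNS
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"Impairment by group: chi2={chi2:.2f}, p={p:.4f}")

# Multiple linear regression with additional risk factors
model = smf.ols(
    "processing_speed ~ C(group) + C(sex) + age_at_dx + years_since_tx + ses",
    data=df,
).fit()
print(model.summary())
```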
Results:
The total sample obtained lower scores on WM (M=90.83, SD=13.35) and PS (M=88.86, SD=14.38) measures than normative samples (M=100, SD=15), p < 0.001. A greater percentage of pediatric cancer survivors demonstrated impairment across all processing efficiency measures (24.8-38.1%) than normative samples (15.9%), p < 0.001. Regarding group differences, the CNS group obtained lower mean WM (M=84.85, SD=11.77) and PS (M=80, SD=14.18) scores than the Non-CNS group (WM M=92.39, SD=13.32; PS M=91.16, SD=13.56), p < 0.001. Rates of impairment between groups differed only for PS scores, with 63.6% of the CNS group and 31.5% of the non-CNS group demonstrating impairment, p < 0.001. Primary CNS tumor cancer type and male biological sex emerged as the only significant risk factors that predicted processing efficiency skills, with male sex predicting lower scores on PS (β=8.91, p<.001) and semantic fluency (β=7.59, p=.007).
Conclusions:
These findings indicate that both pediatric primary CNS tumor and non-CNS cancer survivors exhibit substantial weaknesses in processing efficiency skills after treatment. While both groups demonstrated deficits compared to normative samples, the CNS group was more susceptible to PS impairments than the non-CNS group. A basic initial study of the relationships between risk factors and processing efficiency skills revealed that primary CNS cancer was a predictor of lower performance on working memory and processing speed measures, while male biological sex was a significant risk factor for worse performance on processing speed and semantic fluency measures. Continued focus on the construct of processing efficiency in pediatric cancer survivors is warranted. Applying a standardized approach to assessing and communicating this nuanced cognitive skill could contribute to advancing both clinical practice and outcomes research for pediatric cancer survivors.
Digit Span has been a core Working Memory task, with extensive research conducted on the Forward and Backward components. The latest revision of the WAIS-IV introduced the Sequencing component, designed to increase the working memory and mental manipulation demands. However, relatively little research has been done to understand how Sequencing can be interpreted in clinical settings, as compared to Forward and Backward. The purpose of this study was to investigate how effectively individual components of the Digit Span task predict performance on four independent neuropsychological measures with high working memory demands.
Participants and Methods:
Subjects included 148 adults (Age: M = 39.22, SD = 13.61; Handedness: 130 right, 10 left, and 8 mixed; Males = 88) with refractory epilepsy. Two subjects had primary generalized seizures, while 146 subjects had complex partial seizures (EEG Localization: 44 right temporal; 60 left temporal; 24 independent bitemporal; 1 left extratemporal; 17 indeterminate). Dependent variables included the 2.4-second ISI trial of the Paced Auditory Serial Addition Task (PASAT); the sum of correct responses on Trial 1 and List B of the California Verbal Learning Test (CVLT); the DKEFS Tower Test raw score; and completion time on Part B of the Trail Making Test. The independent variables included the individual raw scores for the Forward, Backward and Sequencing components of the WAIS-IV Digit Span. Hierarchical linear regression was conducted to determine the variance accounted for by each component of the Digit Span and whether that variance was redundant or unique. The four dependent variables were analyzed separately with Digits Forward, Backward and Sequencing entered in a single block.
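The sketch below illustrates one way to run the regression described above and gauge whether each Digit Span component's contribution is unique or redundant (the change in R2 when that component is dropped); the variable names and data file are placeholders, and the PASAT model is shown as the example outcome.

```python
# Illustrative sketch: all three Digit Span components entered together, with each
# component's unique contribution gauged by the drop in R2 when it is removed.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("epilepsy_wm_battery.csv")  # assumed dataset with placeholder columns

full = smf.ols("pasat_2_4 ~ ds_forward + ds_backward + ds_sequencing", data=df).fit()
print("Full model R2:", round(full.rsquared, 3))
print(full.params)
print(full.pvalues)

# Unique (non-redundant) variance of each component: R2 change when it is dropped
predictors = ["ds_forward", "ds_backward", "ds_sequencing"]
for predictor in predictors:
    others = [p for p in predictors if p != predictor]
    reduced = smf.ols(f"pasat_2_4 ~ {' + '.join(others)}", data=df).fit()
    print(predictor, "unique R2:", round(full.rsquared - reduced.rsquared, 3))
```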
Results:
PASAT: The overall model was significant, R2 = 0.36. When examining the individual components, Sequencing was the only significant predictor (β = 0.422, p < 0.001). CVLT: The overall model was significant, R2 = 0.203. When examining the individual components, Sequencing was the only significant predictor (β = 0.410, p < 0.001). Tower Test: The overall model was significant, R2 = 0.176. When examining the individual components, Sequencing was the only significant predictor (β = 0.373, p = 0.004). Trail Making: The overall model was significant, R2 = 0.315. When examining the individual components, both Forward (β = -0.287, p = 0.005) and Sequencing (β = -0.364, p < 0.001) accounted for a significant amount of the variance.
Conclusions:
The combined model for Digit Span accounted for significant amounts of variance in performance on all dependent measures, ranging from 17.6% to 36%. Sequencing accounted for substantially more variance across all examined tasks. On the PASAT, CVLT and Tower Test, the variance accounted for by the components of Digit Span appears to be redundant. However, on Trail Making, both Forward and Sequencing accounted for significant amounts of variance that appear to be independent of one another. What specific task requirement(s) of the Trail Making Test versus the other measures analyzed are accounted for by Forward span is not clear. But this suggests that the individual components of the Digit Span test may measure different things across different tasks.
As the scale of cosmological surveys increases, so does the complexity of the analyses. This complexity can often make it difficult to derive the underlying principles, necessitating statistically rigorous testing to ensure the results of an analysis are consistent and reasonable. This is particularly important in multi-probe cosmological analyses like those used in the Dark Energy Survey (DES) and the upcoming Legacy Survey of Space and Time, where accurate uncertainties are vital. In this paper, we present a statistically rigorous method to test the consistency of contours produced in these analyses and apply this method to the Pippin cosmological pipeline used for type Ia supernova cosmology with the DES. We make use of the Neyman construction, a frequentist methodology that leverages extensive simulations to calculate confidence intervals, to perform this consistency check. A true Neyman construction is too computationally expensive for supernova cosmology, so we develop a method for approximating a Neyman construction with far fewer simulations. We find that for a simulated dataset, the 68% contour reported by the Pippin pipeline and the 68% confidence region produced by our approximate Neyman construction differ by less than a percent near the input cosmology; however, they show more significant differences far from the input cosmology, with a maximal difference of 0.05 in $\Omega_{M}$ and 0.07 in $w$. This divergence is most impactful for analyses of cosmological tensions, but its impact is mitigated when combining supernovae with other cross-cutting cosmological probes, such as the cosmic microwave background.
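A toy one-dimensional version of the Neyman construction described above is sketched below: simulate the estimator on a grid of true parameter values, take the central 68% of each simulated distribution to form a confidence belt, and invert the belt at the observed estimate. The Gaussian stand-in replaces the full supernova simulation and fit, so this illustrates the construction itself, not the Pippin pipeline.

```python
# Toy 1-D Neyman construction (illustrative only): build a confidence belt from
# simulations at each true parameter value, then invert it at the observed estimate.
import numpy as np

rng = np.random.default_rng(0)

def simulate_estimate(true_w, n_sims=2000, sigma=0.05):
    # Placeholder for an end-to-end survey simulation + cosmology fit
    return true_w + rng.normal(0.0, sigma, size=n_sims)

grid = np.linspace(-1.3, -0.7, 61)           # grid of true w values
belt = []
for w_true in grid:
    est = simulate_estimate(w_true)
    lo, hi = np.percentile(est, [16, 84])     # central 68% of the estimator
    belt.append((lo, hi))

w_obs = -0.97                                 # observed best-fit value (illustrative)
accepted = [w for w, (lo, hi) in zip(grid, belt) if lo <= w_obs <= hi]
print(f"68% Neyman interval for w: [{min(accepted):.3f}, {max(accepted):.3f}]")
```

The paper's approximation keeps this basic belt-inversion logic while using far fewer simulations than a brute-force construction would require.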
A multisite research team proposed a survey to assess burnout among healthcare epidemiologists. Anonymous surveys were disseminated to eligible staff at SRN facilities. Half of the respondents were experiencing burnout. Staffing shortages were a key stressor. Allowing healthcare epidemiologists to provide guidance without directly enforcing policies may improve burnout.
Naturalistic online grocery stores could provide a novel setting for evaluating nutrition interventions. In 2021–2022, we recruited US adults (n 144, 59% low-income) to complete two weekly study visits: one in a naturalistic (‘mock’) online grocery store developed for research and one in a real online grocery store. Participants selected groceries and responded to survey questions. Analyses examined survey responses and expenditures on fifteen food categories (e.g., bread, sugar-sweetened beverages). Nearly all enrolled participants completed both visits (98% retention). Moreover, nearly all participants reported that their selections in the naturalistic store were similar to their usual purchases (95%) and that the naturalistic store felt like a real store (92%). Participants’ spending on food categories in the naturalistic store was moderately to strongly correlated with their spending in the real store (range of correlation coefficients: 0⋅36–0⋅67, all P-values < 0⋅001). Naturalistic online grocery stores may offer a promising platform for conducting nutrition research.
It is widely agreed that there is a crisis in labour/employment standards enforcement. A key issue is the role of deterrence measures that penalise violations. Employment standards enforcement in Ontario, like in most jurisdictions, is based mainly on a compliance framework promoting voluntary resolution of complaints and, if that fails, ordering restitution. Deterrence measures that penalise violations are rarely invoked. However, the Ontario government has recently increased the role of proactive inspections and tickets, a low-level deterrence measure which imposes fines of CAD295 plus victim surcharges. In examining the effectiveness of the use of tickets in inspections, we begin by looking at this development in the broader context of employment standards enforcement and its historical trajectory. Then, using administrative data from the Ministry of Labour, we examine when and why tickets are issued in the course of workplace inspections. After demonstrating that even when ticketable violations are detected, tickets are issued only rarely, we explore factors associated with an increased likelihood of an inspector issuing a ticket. Finally, we consider how the overall deterrent effect of workplace inspections is influenced by the use or non-use of deterrence tools.
Researchers studying migration and development have argued over the potential that migration and associated remittances have to improve the economic and social conditions in origin communities. Past research on migration from indigenous communities in Oaxaca has similarly questioned the compatibility of traditional governance systems with high migration rates. We argue, using evidence from four Zapotec communities in rural Oaxaca, that communities can use the organizational capacity of traditional governance systems to access remittances from migrants for the benefit of the community as a whole. Communities can require payment from migrants in lieu of communal labor requirements (tequio) and may directly solicit remittances from migrants for community projects. The extent to which they enforce these requests depends on the existing organizational strength in the community. These findings imply that strong forms of community organization can make the difference between migration contributing to underdevelopment and migration contributing to development.
Psychotic-like experiences (PLEs) are risk factors for the development of psychiatric conditions like schizophrenia, particularly if associated with distress. As PLEs have been related to alterations in both white matter and cognition, we investigated whether cognition (g-factor and processing speed) mediates the relationship between white matter and PLEs.
Methods
We investigated two independent samples (N = 6170 and N = 19 891) from the UK Biobank through path analysis. For both samples, measures of whole-brain fractional anisotropy (gFA) and mean diffusivity (gMD), as indices of white matter microstructure, were derived from probabilistic tractography. For the smaller sample, whole-brain white matter network efficiency and microstructure variables were also derived from structural connectome data.
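A simplified sketch of the mediation question (processing speed mediating the association between global white matter microstructure and g) is given below, using a product-of-coefficients bootstrap rather than the study's full path model; the variable names and data file are illustrative stand-ins for gFA, processing speed, and the g-factor.

```python
# Simplified mediation sketch (not the study's full path model): indirect effect
# of gFA on g-factor via processing speed, with a bootstrap confidence interval.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ukb_subsample.csv")   # assumed columns: gFA, proc_speed, g_factor

def indirect_effect(data):
    a = smf.ols("proc_speed ~ gFA", data=data).fit().params["gFA"]
    b = smf.ols("g_factor ~ proc_speed + gFA", data=data).fit().params["proc_speed"]
    return a * b

rng = np.random.default_rng(1)
boot = []
for _ in range(1000):
    idx = rng.integers(0, len(df), size=len(df))   # resample rows with replacement
    boot.append(indirect_effect(df.iloc[idx]))

point = indirect_effect(df)
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"Indirect effect: {point:.4f}, 95% bootstrap CI [{lo:.4f}, {hi:.4f}]")
```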
Results
The mediation of cognition on the relationships between white matter properties and PLEs was non-significant. However, lower gFA was associated with having PLEs in combination with distress in the full available sample (standardized β = −0.053, p = 0.011). Additionally, lower gFA/higher gMD was associated with lower g-factor (standardized β = 0.049, p < 0.001; standardized β = −0.027, p = 0.003), and this association was partially mediated by processing speed with a proportion mediated of 7% (p < 0.001) for gFA and 11% (p < 0.001) for gMD.
Conclusions
We show that lower global white matter microstructure is associated with having PLEs in combination with distress, which suggests a direction of future research that could help clarify how and why individuals progress from subclinical to clinical psychotic symptoms. Furthermore, we replicated that processing speed mediates the relationship between white matter microstructure and g-factor.
Five international consensus statements on concussion in sports have been published. This commentary argues that there is a strong need for a new approach to these statements that foregrounds public health expertise and patient-centered guidance. Doing so will help players, parents and practitioners keep perspective about these potentially life-altering injuries, especially when they recur.
In this matched case–control study, we sought to determine the association between probiotic use and invasive infections caused by typical probiotic organisms. The odds of probiotic use in cases were 127 times the odds of probiotic use in controls (95% CI, 6.21–2600). Further research into these rare but severe complications is needed.
Depression is strongly associated with chronic disease; yet, the direction of this relationship is poorly understood. Allostatic load (AL) provides a framework for elucidating depression-disease pathways. We aimed to investigate bidirectional, longitudinal associations of baseline depressive symptoms or AL with 5-year AL or depressive symptoms, respectively.
Methods
Data were from baseline, 2-year, and 5-year visits of 620 adults (45–75 years) enrolled in the Boston Puerto Rican Health Study. The Center for Epidemiologic Studies Depression (CES-D) scale (range 0–60) captured depressive symptoms, which were categorized at baseline as low (<8), subthreshold (8–15), or depression-likely (⩾16) symptoms. AL was calculated from 11 parameters of biological functioning, representing five physiological systems. Baseline AL scores were categorized by the number of dysregulated parameters: low (0–2), moderate (3–5), or high (⩾6) AL. Multivariable, multilevel random intercept and slope linear regression models were used to examine associations between 3-category baseline CES-D score and 5-year continuous AL score, and between baseline 3-category AL and 5-year continuous CES-D score.
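A hedged sketch of this mixed-model structure is shown below with statsmodels: a random intercept and slope for follow-up time within each participant, with the baseline depressive symptom category predicting the allostatic load trajectory. The long-format layout, column names, and covariates are illustrative, not the study's actual variables.

```python
# Illustrative multilevel random intercept and slope model for repeated AL scores.
import pandas as pd
import statsmodels.formula.api as smf

long = pd.read_csv("bprhs_long.csv")   # assumed: one row per participant-visit

model = smf.mixedlm(
    "al_score ~ C(cesd_cat_baseline) * years + age + C(sex)",
    data=long,
    groups=long["participant_id"],
    re_formula="~years",               # random intercept and slope for time
).fit()
print(model.summary())
```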
Results
Baseline subthreshold depressive symptoms [mean (95% CI): 4.8 (4.5–5.2)], but not depression-likely symptoms [4.5 (4.2–4.9)], were significantly associated with higher 5-year AL scores, compared to low depressive symptoms [4.3 (3.9–4.7)]. Baseline high AL [19.4 (17.6–21.2)], but not low AL [18.5 (16.5–20.6)], was significantly associated with higher 5-year CES-D score, compared to baseline moderate AL [16.9 (15.3–18.5)].
Conclusions
Depressive symptoms and AL had a bi-directional relationship over time, indicating a nuanced pathway linking depression with chronic diseases among a minority population.
Whole-grain wheat, in particular coloured varieties, may have health benefits in adults with chronic metabolic disease risk factors. Twenty-nine overweight and obese adults with chronic inflammation (high-sensitivity C-reactive protein > 1·0 mg/l) replaced four daily servings of refined grain food products with bran-enriched purple or regular whole-wheat convenience bars (approximately 41–45 g fibre, daily) for 8 weeks in a randomised, single-blind parallel-arm study where body weight was maintained. Anthropometrics, blood markers of inflammation, oxidative stress and lipaemia, and metabolites of anthocyanins and phenolic acids were compared at days 1, 29 and 57 using repeated-measures ANOVA within groups and ANCOVA between groups at day 57, with day 1 as a covariate. A significant reduction in IL-6 and increase in adiponectin were observed within the purple wheat (PW) group. TNF-α was lowered in both groups and ferulic acid concentration increased in the regular wheat (RW) group. Comparing between wheats, only plasma TNF-α and glucose differed significantly (P < 0·05), that is, TNF-α and glucose decreased with RW and PW, respectively. Consumption of PW or RW products showed potential to improve plasma markers of inflammation and oxidative stress in participants with evidence of chronic inflammation, with modest differences observed based on type of wheat.
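The within- and between-group comparisons described above could be sketched as follows with pingouin; the long-format layout, column names, and the IL-6 example are assumptions for illustration rather than the trial's analysis code.

```python
# Illustrative sketch: repeated-measures ANOVA within a group across visits, and
# ANCOVA between groups at day 57 with the day-1 value as covariate.
import pandas as pd
import pingouin as pg

long = pd.read_csv("wheat_markers_long.csv")   # assumed: id, group (PW/RW), day (1/29/57), il6

# Within-group repeated-measures ANOVA across days 1, 29, 57 (purple wheat, IL-6)
pw = long[long["group"] == "PW"]
print(pg.rm_anova(data=pw, dv="il6", within="day", subject="id"))

# Between-group ANCOVA at day 57 with the day-1 value as covariate
wide = long.pivot_table(index=["id", "group"], columns="day", values="il6").reset_index()
wide.columns = ["id", "group", "il6_d1", "il6_d29", "il6_d57"]
print(pg.ancova(data=wide, dv="il6_d57", covar="il6_d1", between="group"))
```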