Flowering rush (Butomus umbellatus L.) is an emergent perennial monocot that has invaded aquatic systems along the U.S.–Canadian border. Currently, there are two known cytotypes of flowering rush, diploid and triploid, within the invaded range. Although most studies have focused on the triploid cytotype, little is known about diploid plants. Therefore, phenology and resource allocation were studied in the diploid cytotype of flowering rush at three study sites (Mentor Marsh, OH; Tonawanda Wildlife Management Area, NY; and Unity Island, NY) to understand seasonal resource allocation and environmental influences on growth, and to optimize management strategies. Samples were harvested once a month from May to November at each site from 2021 to 2023. Plant metrics were regressed against air temperature, water temperature, and water depth. Aboveground biomass peaked from July to September and comprised 50% to 70% of total biomass. Rhizome biomass peaked from September to November and comprised 40% to 50% of total biomass. Rhizome bulbil densities peaked from September to November at 3,000 to 16,000 rhizome bulbils m−2. Regression analysis revealed strong negative relationships between rhizome starch content and air temperature (r2 = 0.52) and water temperature (r2 = 0.46). Other significant, though weak, relationships were found, including a positive relationship between aboveground biomass and air temperature (r2 = 0.17), a negative relationship between rhizome bulbil biomass and air temperature (r2 = 0.18), and a positive relationship between leaf density and air temperature (r2 = 0.17). Rhizomes and rhizome bulbils combined stored up to 60% of total starch and therefore present a unique challenge to management, as these structures cannot be reached directly with herbicides. Thus, management should target the aboveground tissue before peak production (July) to reduce internal starch storage and aim to limit regrowth over several years.
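For readers wanting to reproduce this kind of bivariate analysis, a minimal sketch (in Python, using made-up placeholder values rather than the study data) of regressing rhizome starch content on air temperature and reporting r² is:

```python
# Illustrative sketch only: placeholder monthly values, not the study's measurements.
import numpy as np
from scipy.stats import linregress

air_temp_c = np.array([12.0, 15.5, 19.3, 23.1, 24.8, 20.2, 14.6])   # mean monthly air temperature
starch_pct = np.array([38.0, 35.2, 28.4, 22.1, 20.5, 27.8, 36.3])   # rhizome starch, % dry mass

fit = linregress(air_temp_c, starch_pct)
print(f"slope = {fit.slope:.2f} % per deg C, r^2 = {fit.rvalue**2:.2f}")
```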
Major depressive disorder (MDD) is the leading cause of disability globally, with moderate heritability and well-established socio-environmental risk factors. Genetic studies have been mostly restricted to European settings, with polygenic scores (PGS) demonstrating low portability across diverse global populations.
Methods
This study examines genetic architecture, polygenic prediction, and socio-environmental correlates of MDD in a family-based sample of 10 032 individuals from Nepal with array genotyping data. We used genome-based restricted maximum likelihood to estimate heritability, applied S-LDXR to estimate the cross-ancestry genetic correlation between Nepalese and European samples, and modeled PGS trained on a GWAS meta-analysis of European and East Asian ancestry samples.
Results
We estimated the narrow-sense heritability of lifetime MDD in Nepal to be 0.26 (95% CI 0.18–0.34, p = 8.5 × 10−6). Our analysis was underpowered to estimate the cross-ancestry genetic correlation (rg = 0.26, 95% CI −0.29 to 0.81). MDD risk was associated with higher age (beta = 0.071, 95% CI 0.06–0.08), female sex (beta = 0.160, 95% CI 0.15–0.17), and childhood exposure to potentially traumatic events (beta = 0.050, 95% CI 0.03–0.07), while neither the depression PGS (beta = 0.004, 95% CI −0.004 to 0.01) nor its interaction with childhood trauma (beta = 0.007, 95% CI −0.01 to 0.03) was strongly associated with MDD.
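A minimal sketch of the kind of association model described above, regressing MDD on a depression PGS, age, sex, childhood trauma, and the PGS × trauma interaction. Variable names and the simulated data are placeholders, and the published analysis also accounted for family relatedness, which is omitted here:

```python
# Hedged sketch: simulated placeholder data, not the Nepal cohort.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "pgs": rng.normal(size=n),                 # standardized polygenic score
    "age": rng.uniform(18, 80, size=n),
    "female": rng.integers(0, 2, size=n),
    "trauma": rng.integers(0, 2, size=n),      # childhood exposure to traumatic events
})
logit_p = -2 + 0.05 * (df["age"] - 45) / 10 + 0.4 * df["female"] + 0.3 * df["trauma"]
df["mdd"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# PGS main effect plus PGS x trauma interaction, adjusting for age and sex
model = smf.logit("mdd ~ pgs * trauma + age + female", data=df).fit(disp=False)
print(model.summary())
```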
Conclusions
Estimates of lifetime MDD heritability in this Nepalese sample were similar to previous European ancestry samples, but PGS trained on European data did not predict MDD in this sample. This may be due to differences in ancestry-linked causal variants, differences in depression phenotyping between the training and target data, or setting-specific environmental factors that modulate genetic effects. Additional research among under-represented global populations will ensure equitable translation of genomic findings.
This retrospective cohort study examined prosocial skills development in child welfare-involved children, how intimate partner violence (IPV) exposure explained heterogeneity in children’s trajectories of prosocial skill development, and the degree to which protective factors across children’s ecologies promoted prosocial skill development. Data were from 1,678 children from the National Survey of Child and Adolescent Well-being I, collected between 1999 and 2007. Cohort-sequential growth mixture models were estimated to identify patterns of prosocial skill development between the ages of 3 and 10 years. Four distinct pathways were identified, including two groups that started high (high subtle-decreasing; high decreasing-to-increasing) and two groups that started low (low stable; low increasing-to-decreasing). Children with a prior history of child welfare involvement, preschool-age IPV exposure, school-age IPV exposure, or family income below the federal poverty level had higher odds of being in the high decreasing-to-increasing group compared with the high subtle-decreasing group. Children whose mothers had more than a high school education or greater maternal responsiveness had higher odds of being in the low increasing-to-decreasing group compared with the low stable group. The importance of maternal responsiveness in fostering prosocial skill development underlines the need for further assessment and intervention. Recommendations for clinical assessment and parenting programs are provided.
Profiling patients on a proposed ‘immunometabolic depression’ (IMD) dimension, described as a cluster of atypical depressive symptoms related to energy regulation and immunometabolic dysregulations, may optimise personalised treatment.
Aims
To test the hypothesis that baseline IMD features predict poorer treatment outcomes with antidepressants.
Method
Data on 2551 individuals with depression across the iSPOT-D (n = 967), CO-MED (n = 665), GENDEP (n = 773) and EMBARC (n = 146) clinical trials were used. Predictors included baseline severity of atypical energy-related symptoms (AES), body mass index (BMI) and C-reactive protein levels (CRP, three trials only) separately and aggregated into an IMD index. Mixed models on the primary outcome (change in depressive symptom severity) and logistic regressions on secondary outcomes (response and remission) were conducted for the individual trial data-sets and pooled using random-effects meta-analyses.
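A minimal sketch of the random-effects pooling step (DerSimonian–Laird) applied to per-trial regression coefficients; the betas and standard errors below are illustrative placeholders, not the actual trial estimates:

```python
# Hedged sketch of random-effects meta-analytic pooling with placeholder inputs.
import numpy as np

betas = np.array([0.10, 0.04, 0.15, 0.02])   # per-trial coefficients (e.g. for CRP)
ses   = np.array([0.06, 0.05, 0.08, 0.10])   # their standard errors

w = 1.0 / ses**2                              # fixed-effect (inverse-variance) weights
beta_fe = np.sum(w * betas) / np.sum(w)
q = np.sum(w * (betas - beta_fe) ** 2)        # Cochran's Q
dof = len(betas) - 1
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (q - dof) / c)                # between-trial variance
w_re = 1.0 / (ses**2 + tau2)                  # random-effects weights
beta_pooled = np.sum(w_re * betas) / np.sum(w_re)
se_pooled = np.sqrt(1.0 / np.sum(w_re))
i2 = max(0.0, (q - dof) / q) * 100 if q > 0 else 0.0

print(f"pooled beta = {beta_pooled:.3f} (SE {se_pooled:.3f}), I^2 = {i2:.1f}%")
```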
Results
Although AES severity and BMI did not predict changes in depressive symptom severity, higher baseline CRP predicted smaller reductions in depressive symptoms (n = 376, βpooled = 0.06, P = 0.049, 95% CI 0.0001–0.12, I2 = 3.61%); this was also found for an IMD index combining these features (n = 372, βpooled = 0.12, s.e. = 0.12, P = 0.031, 95% CI 0.01–0.22, I2= 23.91%), with a higher – but still small – effect size compared with CRP. Confining analyses to selective serotonin reuptake inhibitor users indicated larger effects of CRP (βpooled = 0.16) and the IMD index (βpooled = 0.20). Baseline IMD features, both separately and combined, did not predict response or remission.
Conclusions
Depressive symptoms of people with more IMD features improved less when treated with antidepressants. However, clinical relevance is limited owing to small effect sizes and inconsistent associations. Whether these patients would benefit more from treatments targeting immunometabolic pathways remains to be investigated.
Traumatic brain injury is one of several recognized risk factors for cognitive decline and neurodegenerative disease. Currently, risk scores involving modifiable risk/protective factors for dementia have not incorporated head injury history as part of their overall weighted risk calculation. We investigated the association between the LIfestyle for BRAin Health (LIBRA) risk score with odds of mild cognitive impairment (MCI) diagnosis and cognitive function in older former National Football League (NFL) players, both with and without the influence of concussion history.
Participants and Methods:
Former NFL players, ages ≥ 50 (N=1050; mean age = 61.1 ± 5.4 years), completed a general health survey including self-reported medical history and ratings of function across several domains. LIBRA factors (weighted values) included cardiovascular disease (+1.0), hypertension (+1.6), hyperlipidemia (+1.4), diabetes (+1.3), kidney disease (+1.1), cigarette use history (+1.5), obesity (+1.6), depression (+2.1), social/cognitive activity (-3.2), physical inactivity (+1.1), low/moderate alcohol use (-1.0), and healthy diet (-1.7). Within Group 1 (n=761), logistic regression models assessed the association of LIBRA scores with the odds of MCI diagnosis and the independent contribution of concussion history. A modified-LIBRA score incorporated concussion history at the level at which planned contrasts showed significant associations across concussion history groups (0, 1-2, 3-5, 6-9, 10+). The weighted value for concussion history (+1.9) within the modified-LIBRA score was based on its proportional contribution to dementia relative to other LIBRA risk factors, as proposed by the 2020 Lancet Commission Report on Dementia Prevention. Associations of the modified-LIBRA score with odds of MCI and cognitive function were assessed via logistic and linear regression, respectively, in a subset of the sample (Group 2; n=289) who also completed the Brief Test of Adult Cognition by Telephone (BTACT). Race was included as a covariate in all models.
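As an illustration of the weighted scoring described above, a short sketch computing standard and modified-LIBRA scores from the listed weights; the example risk profile is hypothetical:

```python
# Weights taken from the abstract; the example profile is made up for illustration.
LIBRA_WEIGHTS = {
    "cardiovascular_disease": 1.0, "hypertension": 1.6, "hyperlipidemia": 1.4,
    "diabetes": 1.3, "kidney_disease": 1.1, "smoking_history": 1.5,
    "obesity": 1.6, "depression": 2.1, "social_cognitive_activity": -3.2,
    "physical_inactivity": 1.1, "low_moderate_alcohol": -1.0, "healthy_diet": -1.7,
}
CONCUSSION_WEIGHT = 1.9  # modified-LIBRA term for a high concussion-history group

def libra_score(factors: dict, high_concussion_history: bool = False) -> float:
    """Sum the weights of the factors present; optionally add the concussion term."""
    score = sum(LIBRA_WEIGHTS[f] for f, present in factors.items() if present)
    return score + (CONCUSSION_WEIGHT if high_concussion_history else 0.0)

example = {"hypertension": True, "obesity": True, "healthy_diet": True}
print(libra_score(example), libra_score(example, high_concussion_history=True))
```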
Results:
The median LIBRA score in Group 1 was 1.6 (IQR = -1, 3.6). Standard and modified-LIBRA median scores were 1.1 (IQR = -1.3, 3.3) and 2.0 (IQR = -0.4, 4.6), respectively, within Group 2. In Group 1, LIBRA score was significantly associated with the odds of MCI diagnosis (odds ratio [95% confidence interval] = 1.27 [1.19, 1.28], p < .001). Concussion history provided additional information beyond LIBRA scores and was independently associated with the odds of MCI; specifically, the odds of MCI were higher among those with 6-9 (OR = 2.54 [1.21, 5.32], p < .001) and 10+ (OR = 4.55 [2.21, 9.36], p < .001) concussions, compared with those with no prior concussions. Within Group 2, the modified-LIBRA score was associated with higher odds of MCI (OR = 1.61 [1.15, 2.25]) and incrementally improved model information (0.04 increase in Nagelkerke R2) above standard LIBRA scores in the same model. Modified-LIBRA scores were inversely associated with BTACT Executive Function (B = -0.53 [0.08], p = .002) and Episodic Memory scores (B = -0.53 [0.08], p = .002).
Conclusions:
Numerous modifiable risk/protective factors for dementia are reported in former professional football players, but incorporating concussion history may aid the multifactorial appraisal of cognitive decline risk and identification of areas for prevention and intervention. Integration of multi-modal biomarkers will advance this person-centered, holistic approach toward dementia reduction, detection, and intervention.
Traumatic brain injury and cardiovascular disease (CVD) are modifiable risk factors for cognitive decline and dementia. Greater concussion history can potentially increase risk for cerebrovascular changes associated with cognitive decline and may compound effects of CVD. We investigated the independent and dynamic effects of CVD/risk factor burden and concussion history on cognitive function and odds of mild cognitive impairment (MCI) diagnoses in older former National Football League (NFL) players.
Participants and Methods:
Former NFL players, ages 50-70 (N=289; mean age=61.02±5.33 years), reported medical history and completed the Brief Test of Adult Cognition by Telephone (BTACT). CVD/risk factor burden was characterized as ordinal (0-3+) based on the sum of the following conditions: coronary artery disease/myocardial infarction, chronic obstructive pulmonary disease, hypertension, hyperlipidemia, sleep apnea, type-I and II diabetes. Cognitive outcomes included BTACT Executive Function and Episodic Memory Composite Z-scores (standardized on age- and education-based normative data), and the presence of physician diagnosed (self-reported) MCI. Concussion history was discretized into five groups: 0, 1-2, 3-5, 6-9, 10+. Linear and logistic regression models were fit to test independent and joint effects of concussion history and CVD burden on cognitive outcomes and odds of MCI. Race (dichotomized as White and Non-white due to sample distribution) was included in models as a covariate.
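A hedged sketch of this modelling setup: concussion counts binned into the five reported groups and entered, with ordinal CVD burden and race, in linear and logistic models. Variable names and the simulated data are hypothetical placeholders, not the study variables:

```python
# Illustrative sketch only; placeholder data and column names.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 289
df = pd.DataFrame({
    "concussion_count": rng.integers(0, 15, size=n),
    "cvd_burden": rng.integers(0, 4, size=n),        # 0-3+ conditions
    "nonwhite": rng.integers(0, 2, size=n),
})
# Discretize concussion history: 0, 1-2, 3-5, 6-9, 10+
df["concussion_grp"] = pd.cut(
    df["concussion_count"],
    bins=[-0.5, 0.5, 2.5, 5.5, 9.5, np.inf],
    labels=["0", "1-2", "3-5", "6-9", "10+"],
)
df["btact_exec"] = -0.1 * df["cvd_burden"] + rng.normal(size=n)   # executive-function z-score
df["mci"] = rng.binomial(1, 1 / (1 + np.exp(-(-2 + 0.3 * df["cvd_burden"]))))

exec_model = smf.ols("btact_exec ~ C(concussion_grp) + cvd_burden + nonwhite", data=df).fit()
mci_model = smf.logit("mci ~ C(concussion_grp) + cvd_burden + nonwhite", data=df).fit(disp=False)
print(exec_model.params, np.exp(mci_model.params), sep="\n\n")
```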
Results:
Greater CVD burden (unstandardized beta [standard error]; B = -0.10 [0.42], p = .013) and race (B = 0.622 [0.09], p < .001) were associated with lower executive functioning. Compared with those with 0 prior concussions, no significant differences in executive functioning were observed for those with 1-2, 3-5, 6-9, or 10+ prior concussions (ps > .05). Race (B = 0.61 [.13], p < .001), but not concussion history or CVD burden, was associated with episodic memory. There was a trend toward lower episodic memory scores among those with 10+ prior concussions compared with those with no prior concussions (B = -0.49 [.25], p = .052). There were no significant differences in episodic memory among those with 1-2, 3-5, or 6-9 prior concussions compared with those with 0 prior concussions (ps > .05). CVD burden (B = 0.35 [.13], p = .008), race (greater odds in the Non-white group; B = 0.82 [.29], p = .005), and greater concussion history (higher odds of diagnosis in the 10+ group compared with those with 0 prior concussions; B = 2.19 [0.78], p < .005) were associated with higher odds of MCI diagnosis. Significant interaction effects between concussion history and CVD burden were not observed for any outcome (ps > .05).
Conclusions:
Lower executive functioning and higher odds of MCI diagnosis were associated with higher CVD burden and race. Very high concussion history (10+) was selectively associated with higher odds of MCI diagnosis. Reduction of these modifiable factors may mitigate adverse outcomes in older contact sport athletes. In former athletes, consideration of CVD burden is particularly pertinent when assessing executive dysfunction, considered to be a common cognitive feature of traumatic encephalopathy syndrome, as designated by the recent diagnostic criteria. Further research should investigate the social and structural determinants contributing to racial disparities in long-term health outcomes within former NFL players.
The U.S. Department of Agriculture–Agricultural Research Service (USDA-ARS) has been a leader in weed science research covering topics ranging from the development and use of integrated weed management (IWM) tactics to basic mechanistic studies, including biotic resistance of desirable plant communities and herbicide resistance. ARS weed scientists have worked in agricultural and natural ecosystems, including agronomic and horticultural crops, pastures, forests, wild lands, aquatic habitats, wetlands, and riparian areas. Through strong partnerships with academia, state agencies, private industry, and numerous federal programs, ARS weed scientists have made contributions to discoveries in the newest fields of robotics and genetics, as well as the traditional and fundamental subjects of weed–crop competition and physiology and integration of weed control tactics and practices. Weed science at ARS is often overshadowed by other research topics; thus, few are aware of the long history of ARS weed science and its important contributions. This review is the result of a symposium held at the Weed Science Society of America’s 62nd Annual Meeting in 2022 that included 10 separate presentations in a virtual Weed Science Webinar Series. The overarching themes of management tactics (IWM, biological control, and automation), basic mechanisms (competition, invasive plant genetics, and herbicide resistance), and ecosystem impacts (invasive plant spread, climate change, conservation, and restoration) represent core ARS weed science research that is dynamic and efficacious and has been a significant component of the agency’s national and international efforts. This review highlights current studies and future directions that exemplify the science and collaborative relationships both within and outside ARS. Given the constraints of weeds and invasive plants on all aspects of food, feed, and fiber systems, there is an acknowledged need to face new challenges, including agriculture and natural resources sustainability, economic resilience and reliability, and societal health and well-being.
Prior trials suggest that intravenous racemic ketamine is highly effective for treatment-resistant depression (TRD), but phase 3 trials of racemic ketamine are needed.
Aims
To assess the acute efficacy and safety of a 4-week course of subcutaneous racemic ketamine in participants with TRD. Trial registration: ACTRN12616001096448 at www.anzctr.org.au.
Method
This phase 3, double-blind, randomised, active-controlled multicentre trial was conducted at seven mood disorders centres in Australia and New Zealand. Participants received twice-weekly subcutaneous racemic ketamine or midazolam for 4 weeks. Initially, the trial tested fixed-dose ketamine 0.5 mg/kg versus midazolam 0.025 mg/kg (cohort 1). Dosing was revised, after a Data Safety Monitoring Board recommendation, to flexible-dose ketamine 0.5–0.9 mg/kg or midazolam 0.025–0.045 mg/kg, with response-guided dosing increments (cohort 2). The primary outcome was remission (Montgomery–Åsberg Depression Rating Scale score ≤10) at the end of week 4.
Results
The final analysis (participants who received at least one treatment) comprised 68 in cohort 1 (fixed dose) and 106 in cohort 2 (flexible dose). Ketamine was more efficacious than midazolam in cohort 2 (remission rate 19.6% v. 2.0%; OR = 12.1, 95% CI 2.1–69.2, P = 0.005), but did not differ from midazolam in cohort 1 (remission rate 6.3% v. 8.8%; OR = 1.3, 95% CI 0.2–8.2, P = 0.76). Ketamine was well tolerated. Acute adverse effects (psychotomimetic effects, blood pressure increases) resolved within 2 h.
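For illustration, the cohort 2 odds ratio can be reconstructed approximately from a 2 × 2 remission table; the counts below are placeholders consistent with the reported rates, not the exact trial data:

```python
# Hedged sketch: odds ratio with a Wald confidence interval from placeholder counts.
import numpy as np

ket_remit, ket_total = 10, 51   # ~19.6% remission on ketamine (placeholder counts)
mid_remit, mid_total = 1, 50    # ~2.0% remission on midazolam (placeholder counts)

a, b = ket_remit, ket_total - ket_remit
c, d = mid_remit, mid_total - mid_remit

odds_ratio = (a * d) / (b * c)
se_log_or = np.sqrt(1/a + 1/b + 1/c + 1/d)
lo, hi = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se_log_or)
print(f"OR = {odds_ratio:.1f}, 95% CI {lo:.1f}-{hi:.1f}")
```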
Conclusions
Adequately dosed subcutaneous racemic ketamine was efficacious and safe in treating TRD over a 4-week treatment period. The subcutaneous route is practical and feasible.
Background: Sex differences in treatment response to intravenous thrombolysis (IVT) are poorly characterized. We compared sex-disaggregated outcomes in patients receiving IVT for acute ischemic stroke in the Alteplase Compared to Tenecteplase (AcT) trial, a Canadian multicentre, randomised trial. Methods: In this post hoc analysis, the primary outcome was excellent functional outcome (modified Rankin Scale [mRS] score 0-1) at 90 days. Secondary and safety outcomes included return to baseline function, successful reperfusion (eTICI ≥ 2b), death, and symptomatic intracerebral hemorrhage. Results: Of 1577 patients, 755 were women and 822 were men (median age 77 [IQR 68-86] and 70 [59-79] years, respectively). There were no sex differences in rates of mRS score 0-1 (adjusted risk ratio [aRR] 0.95 [0.86-1.06]), return to baseline function (aRR 0.94 [0.84-1.06]), reperfusion (aRR 0.98 [0.80-1.19]), or death (aRR 0.91 [0.79-1.18]). There was no effect modification by treatment type on the association between sex and outcomes. The probability of excellent functional outcome decreased with increasing onset-to-needle time, and this relation did not vary by sex (p-interaction = 0.42). Conclusions: The AcT trial demonstrated comparable functional, safety, and angiographic outcomes by sex, and this did not differ between alteplase and tenecteplase. The pragmatic enrolment and broad national participation in AcT provide reassurance that there do not appear to be sex differences in outcomes among Canadians receiving IVT.
Settlement scaling theory predicts that higher site densities lead to increased social interactions that, in turn, boost productivity. The scaling relationship between population and land area holds for several ancient societies, but as demonstrated by the sample of 48 sites in this study, it does not hold for the Northern Maya Lowlands. Removing smaller sites from the sample brings the results closer to scaling expectations. We argue that applications of scaling theory benefit by considering social interaction as a product not only of proximity but also of daily life and spatial layouts.
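A minimal sketch of the population-area scaling test described above, fitting a log-log regression to hypothetical site values; published settlement scaling work typically expects a sublinear exponent (roughly 2/3 to 5/6), and the numbers below are placeholders, not the 48-site sample:

```python
# Hedged sketch: estimate the scaling exponent b in area ~ population**b via log-log regression.
import numpy as np
from scipy.stats import linregress

population = np.array([120, 300, 450, 900, 1500, 2400, 5200, 8000])   # placeholder site populations
area_ha    = np.array([6.0, 11.0, 14.5, 24.0, 33.0, 47.0, 80.0, 110.0])  # placeholder settled areas (ha)

fit = linregress(np.log(population), np.log(area_ha))
print(f"scaling exponent b = {fit.slope:.2f}, r^2 = {fit.rvalue**2:.2f}")
```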
In difficult-to-treat depression (DTD) the outcome metrics historically used to evaluate treatment effectiveness may be suboptimal. Metrics based on remission status and on single end-point (SEP) assessment may be problematic given infrequent symptom remission, temporal instability, and poor durability of benefit in DTD.
Methods
Self-report and clinician assessment of depression symptom severity were regularly obtained over a 2-year period in a chronic and highly treatment-resistant registry sample (N = 406) receiving treatment as usual, with or without vagus nerve stimulation. Twenty alternative metrics for characterizing symptomatic improvement were evaluated, contrasting SEP metrics with integrative (INT) metrics that aggregated information over time. Metrics were compared in effect size and discriminating power when contrasting groups that did (N = 153) and did not (N = 253) achieve a threshold level of improvement in end-point quality-of-life (QoL) scores, and in their association with continuous QoL scores.
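To make the SEP/INT distinction concrete, a short sketch computing one single end-point metric (percentage change at the last assessment) and one integrative metric (proportion of the observation period in partial response or better) from an illustrative symptom series; the 25% partial-response threshold is a common convention, not necessarily the registry's exact definition:

```python
# Hedged sketch with a made-up longitudinal symptom series for one patient.
import numpy as np

baseline = 30.0
scores = np.array([30, 27, 22, 18, 21, 16, 14, 15, 12])   # repeated severity ratings over follow-up

pct_change_endpoint = 100 * (scores[-1] - baseline) / baseline      # SEP metric
pct_change_series = 100 * (scores - baseline) / baseline
prop_time_partial_response = np.mean(pct_change_series <= -25)      # INT metric

print(f"end-point change: {pct_change_endpoint:.0f}%")
print(f"proportion of period in partial response or better: {prop_time_partial_response:.2f}")
```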
Results
Metrics based on remission status had smaller effect size and poorer discrimination of the binary QoL outcome and weaker associations with the continuous end-point QoL scores than metrics based on partial response or response. The metrics with the strongest performance characteristics were the SEP measure of percentage change in symptom severity and the INT metric quantifying the proportion of the observation period in partial response or better. Both metrics contributed independent variance when predicting end-point QoL scores.
Conclusions
Revision is needed in the metrics used to quantify symptomatic change in DTD with consideration of INT time-based measures as primary or secondary outcomes. Metrics based on remission status may not be useful.
Recent research has shown that risk and reward are positively correlated in many environments, and that people have internalized this association as a “risk-reward heuristic”: when making choices based on incomplete information, people infer probabilities from payoffs and vice versa, and these inferences shape their decisions. We extend this work by examining people’s expectations about another fundamental trade-off — that between monetary reward and delay. In 2 experiments (total N = 670), we adapted a paradigm previously used to demonstrate the risk-reward heuristic. We presented participants with intertemporal choice tasks in which either the delayed reward or the length of the delay was obscured. Participants inferred larger rewards for longer stated delays, and longer delays for larger stated rewards; these inferences also predicted people’s willingness to take the delayed option. In exploratory analyses, we found that older participants inferred longer delays and smaller rewards than did younger ones. All of these results replicated in 2 large-scale pre-registered studies with participants from a different population (total N = 2138). Our results suggest that people expect intertemporal choice tasks to offer a trade-off between delay and reward, but differ in their expectations about this trade-off. This “delay-reward heuristic” offers a new perspective on existing models of intertemporal choice and provides new insights into unexplained and systematic individual differences in the willingness to delay gratification.
The purpose of this investigation was to expand upon the limited existing research examining the test–retest reliability, cross-sectional validity and longitudinal validity of a sample of bioelectrical impedance analysis (BIA) devices as compared with a laboratory four-compartment (4C) model. Seventy-three healthy participants aged 19–50 years were assessed by each of fifteen BIA devices, with resulting body fat percentage estimates compared with a 4C model utilising air displacement plethysmography, dual-energy X-ray absorptiometry and bioimpedance spectroscopy. A subset of thirty-seven participants returned for a second visit 12–16 weeks later and were included in an analysis of longitudinal validity. The sample of devices included fourteen consumer-grade and one research-grade model in a variety of configurations: hand-to-hand, foot-to-foot and bilateral hand-to-foot (octapolar). BIA devices demonstrated high reliability, with precision error ranging from 0·0 to 0·49 %. Cross-sectional validity varied, with constant error relative to the 4C model ranging from −3·5 (sd 4·1) % to 11·7 (sd 4·7) %, standard error of the estimate values of 3·1–7·5 % and Lin’s concordance correlation coefficients (CCC) of 0·48–0·94. For longitudinal validity, constant error ranged from −0·4 (sd 2·1) % to 1·3 (sd 2·7) %, with standard error of the estimate values of 1·7–2·6 % and Lin’s CCC of 0·37–0·78. While performance varied widely across the sample investigated, select models of BIA devices (particularly octapolar and select foot-to-foot devices) may hold potential utility for the tracking of body composition over time, particularly in contexts in which the purchase or use of a research-grade device is infeasible.
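A minimal sketch of the agreement statistics used here (constant error, standard error of the estimate, and Lin's concordance correlation coefficient), computed on placeholder body-fat estimates rather than the study data:

```python
# Hedged sketch: agreement between a BIA device and a criterion 4C estimate (placeholder values).
import numpy as np

bia = np.array([22.1, 30.5, 18.2, 27.9, 35.0, 24.4, 29.1])   # device body-fat %
ref = np.array([24.0, 31.2, 20.5, 26.8, 33.1, 25.9, 30.4])   # 4C model body-fat %

constant_error = np.mean(bia - ref)

# SEE from the regression of device values on criterion values
slope, intercept = np.polyfit(ref, bia, 1)
residuals = bia - (slope * ref + intercept)
see = np.sqrt(np.sum(residuals**2) / (len(ref) - 2))

# Lin's CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))^2)
cov = np.mean((bia - bia.mean()) * (ref - ref.mean()))
ccc = 2 * cov / (bia.var() + ref.var() + (bia.mean() - ref.mean()) ** 2)

print(f"CE = {constant_error:.2f}%, SEE = {see:.2f}%, CCC = {ccc:.2f}")
```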
Non-archosaur archosauromorphs are a paraphyletic group of diapsid reptiles that were important members of global Middle and Late Triassic continental ecosystems. Included in this group are the azendohsaurids, a clade of allokotosaurians (kuehneosaurids and Azendohsauridae + Trilophosauridae) that retain the plesiomorphic archosauromorph postcranial body plan but evolved disparate cranial features that converge on later dinosaurian anatomy, including sauropodomorph-like marginal dentition and ceratopsian-like postorbital horns. Here we describe a new malerisaurine azendohsaurid from two monodominant bonebeds in the Blue Mesa Member, Chinle Formation (Late Triassic, ca. 218–220 Ma); the first occurs at Petrified Forest National Park and preserves a minimum of eight individuals of varying sizes, and the second occurs near St. Johns, Arizona. Puercosuchus traverorum n. gen. n. sp. is a carnivorous malerisaurine that is closely related to Malerisaurus robinsonae from the Maleri Formation of India and to Malerisaurus langstoni from the Dockum Group of western Texas. Dentigerous elements from Puercosuchus traverorum n. gen. n. sp. confirm that some Late Triassic tooth morphotypes thought to represent early dinosaurs cannot be differentiated from, and likely pertain to, Puercosuchus-like malerisaurine taxa. These bonebeds from northern Arizona support the hypothesis that non-archosauriform archosauromorphs were locally diverse near the middle Norian and experienced an extinction event prior to the end-Triassic mass extinction coincidental with the Adamanian-Revueltian boundary recognized at Petrified Forest National Park. The relatively late age of this early-diverging taxon (Norian) suggests that the diversity of azendohsaurids is underrepresented in Middle and Late Triassic fossil records around the world.
Placement of fertilizer in the seed furrow to increase nutrient availability is a common practice in row-crop production. While in-furrow application of fertilizer is widely utilized in the production of winter wheat (Triticum aestivum L.), there is a lack of work evaluating new formulations and nutrient combinations that are available. The objective of this study was to quantify the effects of in-furrow fertilizer products, and combinations of products, on winter wheat grain yield and nitrogen and mineral concentrations. Trials were conducted across five site-years in central Oklahoma using 11 fertilizer formulations placed in-furrow at the time of planting. At locations where soil test phosphorus (STP) or potassium levels were above sufficiency, the use of in-furrow fertilizers did not improve yield over the control. Inconsistent responses were noted at locations where STP levels were below the critical threshold. While one location showed no response to the addition of P regardless of source, two other locations had significant yield responses from three or more P-containing fertilizers. The addition of both sulphur and zinc increased yield over the base product at one low-STP location. Nutrient concentrations were also influenced in nutrient-limited soils; however, no trends in response were present. Based upon the results of this study, the application of in-furrow fertilizer has the potential to increase winter wheat grain yield and nutrient concentration when soil nutrients are limiting. As expected, the addition of fertilizer when soil test levels were at or above sufficiency did not increase grain yield.
This chapter presents a comprehensive review of the interaction between circum-Caribbean indigenous peoples and nonhuman primates before and at early European contact. It fills significant gaps in contemporary scholarly literature by providing an updated archaeological history of the social and symbolic roles of monkeys in this region. We begin by describing the zooarchaeological record of primates in the insular and coastal circum-Caribbean Ceramic period archaeological sites. Drawing from the latest archaeological investigations that use novel methods and techniques, we also review other biological evidence of the presence of monkeys. In addition, we compile a list of indigenously crafted portable material imagery and review rock art that allegedly depicts primates in the Caribbean. Our investigation is supplemented by the inclusion of written documentary sources, specifically, ethnoprimatological information derived from early ethnohistorical sources on the multifarious interactions between humans and monkeys in early colonial societies. Finally, we illustrate certain patterns that may have characterized interactions between humans and monkeys in past societies of the circum-Caribbean region (300–1500 CE), opening avenues for future investigations of this topic.
Keywords:
Archaeoprimatology, Ceramic period, Greater and Lesser Antilles, Island and coastal archaeology, Saladoid, Taíno, Trinidad, Venezuela
Background: Idiopathic normal pressure hydrocephalus (iNPH) is a disorder of the elderly with progressive worsening of gait and balance, cognition, and urinary control, which requires assessment using criteria recommended by international iNPH guidelines. Methods: Adult Hydrocephalus Clinical Research Network (AHCRN) prospective registry data from five centers over a 50-month interval included entry criteria, demographics, comorbidities, examination findings using standard AHCRN gait and neuropsychology assessments, shunt procedures, complications of CSF drainage, complications within 30 days of surgery, and 1-year postoperative follow-up. Results: 547 patients were referred for assessment of suspected iNPH. 123 patients (21.6%) did not meet clinical criteria to proceed with further testing. 424 patients (74.4%; mean age 76.7 ± 6.0 years; 269 male) underwent a lumbar puncture or lumbar drain, and 193 (45.6%) underwent insertion of a ventriculoperitoneal shunt. By 8-12 months after shunt surgery, gait velocity was 0.96 ± 0.35 m/s (54% faster than before CSF drainage). Mean MoCA scores increased from 21.0 ± 5.0 (median = 22.0) at baseline to 22.6 ± 5.5 (median = 24) 12 months post-surgery. Gait and cognitive improvements were clinically significant. No deaths occurred. 8% of shunt-surgery patients experienced minor complications, and the 30-day reoperation rate was 4.1%. Conclusions: This AHCRN study demonstrated that CSF-drainage testing of patients with suspected iNPH successfully identified those who could undergo CSF-shunt surgery with a high rate of improvement and a low rate of complications.
Bloodstream infections (BSIs) are a frequent cause of morbidity in patients with acute myeloid leukemia (AML), due in part to the presence of central venous access devices (CVADs) required to deliver therapy.
Objective:
To determine the differential risk of bacterial BSI during neutropenia by CVAD type in pediatric patients with AML.
Methods:
We performed a secondary analysis in a cohort of 560 pediatric patients (1,828 chemotherapy courses) receiving frontline AML chemotherapy at 17 US centers. The exposure was CVAD type at course start: tunneled externalized catheter (TEC), peripherally inserted central catheter (PICC), or totally implanted catheter (TIC). The primary outcome was course-specific incident bacterial BSI; secondary outcomes included mucosal barrier injury (MBI)-BSI and non-MBI BSI. Poisson regression was used to compute adjusted rate ratios comparing BSI occurrence during neutropenia by line type, controlling for demographic, clinical, and hospital-level characteristics.
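A hedged sketch of the Poisson rate model described above, with log neutropenic days as the offset, yielding incidence rate ratios by CVAD type. Variable names, covariates, and the simulated data are placeholders; the published model adjusted for additional demographic, clinical, and hospital-level characteristics:

```python
# Illustrative sketch only; placeholder data, not the study cohort.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 1828
df = pd.DataFrame({
    "cvad_type": rng.choice(["TEC", "PICC", "TIC"], size=n),
    "age_years": rng.uniform(0, 18, size=n),
    "neutropenic_days": rng.integers(5, 30, size=n),
})
df["bsi_count"] = rng.poisson(0.012 * df["neutropenic_days"])   # ~12 BSIs per 1,000 neutropenic days

# Poisson regression with an exposure offset; exponentiated coefficients are rate ratios vs. TEC
model = smf.poisson(
    "bsi_count ~ C(cvad_type, Treatment(reference='TEC')) + age_years",
    data=df,
    offset=np.log(df["neutropenic_days"]),
).fit(disp=False)
print(np.exp(model.params))
```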
Results:
The rate of BSI did not differ by CVAD type: 11 BSIs per 1,000 neutropenic days for TECs, 13.7 for PICCs, and 10.7 for TICs. After adjustment, there was no statistically significant association between CVAD type and BSI: PICC incident rate ratio [IRR] = 1.00 (95% confidence interval [CI], 0.75–1.32) and TIC IRR = 0.83 (95% CI, 0.49–1.41) compared to TEC. When MBI and non-MBI were examined separately, results were similar.
Conclusions:
In this large, multicenter cohort of pediatric AML patients, we found no difference in the rate of BSI during neutropenia by CVAD type. This may be due to a risk-profile for BSI that is unique to AML patients.