Termination of an existing failed corn stand before replanting is essential. Two studies were conducted in Stoneville and Verona, MS, from 2020 to 2021 to evaluate the timing of corn or soybean replanting following different herbicide treatments applied to simulated failed stands of corn. Treatments included paraquat alone at 841 g ai ha−1, paraquat at 841 g ha−1 + metribuzin at 211 g ai ha−1, and clethodim at 51 g ai ha−1 + glyphosate at 1,121 g ae ha−1 applied at the V2 growth stage. Replant timings were 1 and 7 d after herbicide treatment (DAT). Pooled across replant timings, paraquat + metribuzin provided greater control 3 DAT than the other treatments in both studies. At 14 and 21 DAT, clethodim + glyphosate controlled more corn than did paraquat + metribuzin or paraquat alone. Control of a simulated failed corn stand with paraquat alone never exceeded 50% at 3 to 21 DAT. Soybean yield in all plots receiving a herbicide treatment targeting the simulated failed corn stand was similar and ≥2,150 kg ha−1. When applied at the V2 corn growth stage, both clethodim + glyphosate and paraquat + metribuzin controlled a simulated failed stand of corn. This study demonstrated the importance of terminating failed stands of corn before replanting because of the dramatic yield reductions observed in plots not treated with herbicide.
Florpyrauxifen-benzyl is a postemergence rice herbicide that has reduced rice yield in some situations, and producers are concerned that the impact could be even greater at low rice seeding densities. Therefore, research was conducted in Stoneville, MS, from 2019 to 2021 to evaluate the effect of florpyrauxifen-benzyl on rice yield when a hybrid was seeded at reduced densities. Rice cultivar FullPage RT 7521 FP was seeded at 10, 17, 24, 30, and 37 kg ha−1. At the 4-leaf to 1-tiller growth stage, florpyrauxifen-benzyl was applied at 0 or 58 g ai ha−1. Rice injury following application of florpyrauxifen-benzyl was ≤8% across all seeding rates and evaluation intervals. Application of florpyrauxifen-benzyl reduced plant height by 14% across all seeding rates but did not delay rice maturity. When florpyrauxifen-benzyl was not applied, rice seeded at 10 and 17 kg ha−1 matured more slowly than rice seeded at 24, 30, and 37 kg ha−1. When florpyrauxifen-benzyl was applied, rough rice grain yields were reduced at the 17 and 37 kg ha−1 seeding rates, but not at any other seeding rate. In conclusion, application of florpyrauxifen-benzyl at a 2× rate can cause yield losses that vary with rice seeding density.
Objectives/Goals: Cerebral amyloid angiopathy (CAA), characterized by the accumulation of amyloid-beta in the cerebral vasculature, compromises blood vessel integrity, leading to brain hemorrhages and accelerated cognitive decline in Alzheimer’s disease (AD) patients. In this study, we are conducting a genome-wide association study (GWAS) to identify genetic risk factors for CAA. Methods/Study Population: We genotyped 1,253 additional AD cases and curated existing genome-wide genotype data from 110 AD and 502 non-AD donors from the Mayo Clinic Brain Bank. We performed QC and imputation of all datasets. We conducted GWAS in AD cases only (N = 1,363), non-AD donors only, and the combined cohort (N = 1,865) by testing imputed variant dosages for association with square-root-transformed CAA scores using linear regression, adjusting for relevant covariates. To assess associations in the context of major CAA risk factors, we performed interaction analyses with APOE ε4 presence and sex, and pursued stratified analyses. We collected peripheral gene expression measures using RNA isolated from 188 PAXgene tube samples from 95 donors collected across multiple time points. More than one-third of these participants have MRI measures collected. Results/Anticipated Results: Variants at the APOE locus were the most significant in our study. In addition, several other variants with suggestive associations were found under the main model adjusting for AD neuropathology (Braak and Thal). A LINC-PINT splice variant remained associated with lower CAA scores in AD cases without the APOE ε4 risk allele. To enhance the robustness of our findings, we are pursuing further expansion of our study cohort. In the periphery, we expect to identify expression changes associated with neuroimaging indicators of CAA and to determine whether variants and genes discovered via GWAS are implicated in these changes. Discussion/Significance of Impact: We expect this study to provide further insight into the genetic architecture underlying risk for CAA, both in the context of significant AD pathology and without it. Characterization of genetic variants and functional outcomes in the context of neuropathology may lead to new avenues of research aimed at identifying biomarkers and therapies to treat CAA.
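For readers who want to see the shape of the association test described above, here is a minimal, hypothetical sketch of a per-variant linear regression of square-root-transformed CAA score on imputed dosage with covariate adjustment. The DataFrame layout, column names, and covariate list are illustrative assumptions, not the study's actual pipeline.

```python
# Hypothetical sketch of the per-variant test described in the Methods:
# regress sqrt(CAA) on imputed dosage, adjusting for covariates.
# Column names and covariates are placeholders, not the study's pipeline.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def caa_gwas(pheno: pd.DataFrame, dosages: pd.DataFrame, covariates: list) -> pd.DataFrame:
    """pheno: one row per donor with a 'CAA' column plus covariate columns.
    dosages: one column per variant (imputed dosage 0-2), same index as pheno."""
    y = np.sqrt(pheno["CAA"])                       # square-root-transformed CAA
    rows = []
    for variant in dosages.columns:
        X = pheno[covariates].copy()
        X["dosage"] = dosages[variant]
        X = sm.add_constant(X)
        fit = sm.OLS(y, X, missing="drop").fit()
        rows.append({"variant": variant,
                     "beta": fit.params["dosage"],
                     "p": fit.pvalues["dosage"]})
    return pd.DataFrame(rows).sort_values("p")

# Example call with hypothetical covariate names:
# hits = caa_gwas(pheno, dosages, ["age", "sex", "Braak", "Thal"])
```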
Florpyrauxifen-benzyl was commercialized in 2018 to target barnyardgrass and aquatic or broadleaf weeds in rice. Field studies were conducted from 2019 to 2021 in Stoneville, MS, to evaluate barnyardgrass control following a simulated failure of florpyrauxifen-benzyl or other common postemergence rice herbicides. In the first field study, florpyrauxifen-benzyl was applied at 0 and 15 g ai ha−1 to rice at the two- to three-leaf stage to simulate a failed application targeting barnyardgrass. Sequential herbicide treatments included no herbicide and full rates of imazethapyr, quinclorac, bispyribac-Na, and cyhalofop applied 7 or 14 d after the florpyrauxifen-benzyl treatment. The second field study was designed to evaluate barnyardgrass control with florpyrauxifen-benzyl following simulated failure of other postemergence rice herbicides. Initial herbicide treatments included no herbicide and half rates of imazethapyr, quinclorac, bispyribac-Na, and propanil. Sequential applications at 7 or 14 d after the initial herbicide treatments included florpyrauxifen-benzyl at 0 and 30 g ai ha−1. Results from the first study indicated that barnyardgrass control 21 d after final treatment (DAFT) was greater with sequential treatments applied 7 rather than 14 d after initial treatment (DA-I) when no initial florpyrauxifen-benzyl was applied. Delaying sequential treatments until 14 d after initial florpyrauxifen-benzyl at 15 g ha−1 allowed barnyardgrass to become too large to control with other rice herbicides. Rough rice yield was reduced in plots where the quinclorac application was delayed from 7 to 14 DA-I with no initial application of florpyrauxifen-benzyl. The second study suggested that florpyrauxifen-benzyl application should be delayed until 14 d after a herbicide failure. Although barnyardgrass control 21 DAFT did not differ whether florpyrauxifen-benzyl was applied 7 or 14 DA-I of any of the herbicides evaluated, >85% control was achieved only when the florpyrauxifen-benzyl application was delayed until 14 DA-I. These results demonstrate barnyardgrass control options following simulated failed applications of common rice herbicides.
Prion disease is a rare, invariably fatal neurodegenerative disease characterized by rapid neuronal degeneration. Mutations in the PRNP gene cause genetic prion disease (GPD). In animal models, microglial activation, astrocytosis, and release of neurofilament precede the onset of frank symptoms (Sorce & Nuvolone 2020, Minikel 2020). In humans at risk for GPD, prodromal pathology appears to occur in only a brief window prior to symptom onset (Vallabh et al. 2020, Thompson et al. 2021), but some data suggest that known PRNP mutation carriers may exhibit cognitive abnormalities before meeting clinical diagnostic criteria (Mole et al., 2021). We aim to examine pre-symptomatic differences in cognitive processing speed (CPS) and executive function (EF) between PRNP mutation carriers and controls.
Participants and Methods:
Our sample includes two groups from an ongoing observational study of GPD (Vallabh et al., 2020): known PRNP mutation carriers (N = 32, Age M = 45.77, SD = 14.75) and a control group of non-carriers with a family history of GPD and healthy controls with no known family history (N = 11, Age M = 42.01, SD = 12.43). All participants completed a full cognitive battery at baseline and annually thereafter. We compared first-visit performance on measures of CPS and EF using the National Institutes of Health (NIH) Toolbox [Pattern Comparison (NIH-PC), Flanker, and Dimensional Change Card Sort (NIH-DCCS)], Trail Making Test (TMT) A and B, and the Delis-Kaplan Executive Function System (D-KEFS) Color-Word Interference Test (CWIT).
Results:
Independent t-tests and Mann-Whitney U tests compared cognitive test performance between groups. None of the cognitive test measures assessed exhibited significant group differences after Bonferroni correction for N = 10 tests (corrected P > 0.05). Scores for mutation carriers were non-significantly lower than those of controls on TMT-B (Z-score Mdn = .29, SD = 1.33 vs. Z-score Mdn = .96, SD = .97), NIH-PC (Age-corrected Standard Score [ACSS] M = 100.13, SD = 20.76 vs. ACSS M = 114.82, SD = 14.61), NIH-Flanker (ACSS M = 83.58, SD = 9.72 vs. ACSS M = 90.64, SD = 10.94), and NIH-DCCS (ACSS M = 101.29, SD = 16.37 vs. ACSS M = 112.00, SD = 16.28), but not on TMT-A or any of the four CWIT conditions.
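The group comparison described above can be sketched as follows. This is an illustrative, hypothetical snippet: the measure names, the choice of which measures receive the non-parametric test, and the data layout are assumptions, not the study's analysis code.

```python
# Hypothetical sketch: carrier-vs-control comparisons on each cognitive
# measure with a Bonferroni correction for the number of tests.
from scipy import stats

def compare_groups(carriers, controls, alpha=0.05, nonparametric=("TMT-B",)):
    """carriers/controls: dicts mapping measure name -> array of scores."""
    n_tests = len(carriers)
    results = {}
    for measure, x in carriers.items():
        y = controls[measure]
        if measure in nonparametric:
            stat, p = stats.mannwhitneyu(x, y, alternative="two-sided")
        else:
            stat, p = stats.ttest_ind(x, y, equal_var=False)
        results[measure] = {"stat": stat,
                            "p_uncorrected": p,
                            "significant": p < alpha / n_tests}  # Bonferroni
    return results
```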
Conclusions:
We did not detect any significant cognitive deficits in known PRNP mutation carriers. This is consistent with the lack of prodromal pathological biomarker or cognitive changes reported in Vallabh et al. (2020), and with the finding of Mole et al. (2021) that most tests reveal impairment only at a stage when carriers report subjective symptoms. Our results suggest an opportunity for primary prevention to preserve full cognitive health in at-risk individuals. However, the small sample size and limited test sensitivity may leave us underpowered to detect subtle deficits. Future research is warranted to further characterize the neuropsychological profile of pre-symptomatic GPD.
Cognitive dysfunction is prominent in homeless and precariously housed persons, and memory is the most pervasively affected domain. The presence of multimorbid physical and mental illness suggests that several underlying mechanisms of memory impairment may be at play. The serial position phenomenon describes the tendency to best recall the first (primacy effect) and last (recency effect) words of a supra-span word list. Recency recall engages executive and working-memory systems, whereas primacy recall depends on long-term memory. This study investigates memory dysfunction in a homeless and precariously housed sample by identifying and characterizing unique subtypes of serial position profiles on a test of verbal memory.
Participants and Methods:
Data were used from a 20-year study of homeless and precariously housed adults recruited from an impoverished neighbourhood in Vancouver, Canada. Participants were sub-grouped according to their serial position profile on the Hopkins Verbal Learning Test-Revised using a latent profile analysis (LPA; n = 411). Paired samples t-tests were conducted to determine differences in percent recall from each word-list region within classes. Linear regression analyses were used to examine between-class differences in mean serial position scores and other cognitive measures (memory, attention, processing speed, cognitive control). Covariates included age, sex, and education.
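As a rough illustration of this profiling approach, the sketch below groups participants by their HVLT-R serial position scores and runs a within-class paired comparison. Latent profile analysis is approximated here by a Gaussian mixture model, and the column names are hypothetical; this is not the study's analysis code.

```python
# Hypothetical sketch: group participants by serial position profile
# (Gaussian mixture as a stand-in for latent profile analysis), then compare
# primacy vs. recency recall within a class. Column names are placeholders.
import pandas as pd
from scipy import stats
from sklearn.mixture import GaussianMixture

def assign_profiles(hvlt: pd.DataFrame, n_profiles: int = 2, seed: int = 0) -> pd.Series:
    """hvlt: one row per participant with 'primacy', 'middle', 'recency'
    percent-recall columns from the HVLT-R."""
    X = hvlt[["primacy", "middle", "recency"]].to_numpy()
    gmm = GaussianMixture(n_components=n_profiles, random_state=seed).fit(X)
    return pd.Series(gmm.predict(X), index=hvlt.index, name="profile")

def primacy_vs_recency(hvlt: pd.DataFrame, profile: pd.Series, label: int):
    """Paired-samples t-test of primacy vs. recency recall within one class."""
    sub = hvlt[profile == label]
    return stats.ttest_rel(sub["primacy"], sub["recency"])
```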
Results:
LPA identified two profiles characterized by (1) reduced primacy relative to recency (RP; n = 150); and (2) reduced recency relative to primacy (RR; n = 261). Pairwise comparisons within the RP class showed that recency was better than primacy (p < .001, d = .66) and middle recall (p < .001, d = .52), with no difference between primacy and middle recall (p = .68, d = .04). All pairwise comparisons differed within the RR class (primacy > middle recall: p < .001, d = 1.85; primacy > recency recall: p < .001, d = 1.32; middle > recency recall: p < .05, d = .132). The RP class had worse performance on measures of total immediate (β = .47, p < .001) and delayed verbal recall (β = .32, p < .001); processing speed (β = .20, p < .001); and cognitive control (β = .22, p < .001). The RR class made more repetition errors (β = .25, p < .001).
Conclusions:
These findings support substantial heterogeneity in memory functioning among homeless and precariously housed individuals. The RP profile was characterized by poorer cognitive functioning across several domains, which suggests multiple contributions to memory impairment, including dysfunction of long-term memory circuitry. Individuals with the RR profile, who made more repetition errors, may experience difficulties with self-monitoring during verbal learning. Subsequent studies will explore the neurobiological underpinnings of these subgroups to further characterize the profiles and identify targets for cognitive intervention.
Precariously housed individuals are exposed to multiple adverse factors that negatively impact neurocognitive functioning. This population is also subject to poor life outcomes, such as impaired psychosocial functioning. Neurocognitive functioning plays an important role in psychosocial functioning and may be especially critical for precariously housed individuals, who face numerous barriers in their daily lives. However, few studies have explicitly examined the cognitive determinants of functional outcomes in this population. Cognitive intraindividual variability (IIV) refers to within-person differences in neurocognitive functioning and has been used as a marker of frontal system pathology. Increased IIV has been associated with worse cognitive performance, cognitive decline, and poorer everyday functioning. Hence, IIV may add to the predictive utility of commonly used neuropsychological measures and may serve as an emergent predictor of poor outcomes in at-risk populations. The objective of the current study was to examine IIV as a unique index of the neurocognitive contributions to functional outcomes within a large sample of precariously housed individuals. It was hypothesized that greater IIV would be associated with poorer current (i.e., baseline) and long-term (i.e., up to 12 years) psychosocial functioning.
Participants and Methods:
Four hundred thirty-seven adults were recruited from single-room occupancy hotels located in the Downtown Eastside of Vancouver, Canada (mean age = 44 years; 78% male) between November 2008 and November 2021. Baseline neurocognitive functioning was assessed at study enrolment. Scores from the Social and Occupational Functioning Assessment Scale (SOFAS), the Role Functioning Scale (RFS), and the physical component score (PCS) and mental component score (MCS) of the 36-Item Short Form Survey Instrument were obtained at participants’ baseline assessments and at their last available follow-up assessment to represent baseline and long-term psychosocial functioning, respectively. An index of IIV was derived, using an established formula, from a battery of standardized tests that broadly assessed verbal learning and memory, sustained attention, mental flexibility, and cognitive control. A series of multiple linear regressions was conducted to predict baseline and long-term social and role functioning (averaged across SOFAS and RFS scores), PCS scores, and MCS scores from IIV. Each model also included common predictors of functioning: a global cognitive composite score, age, and years of education.
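One common way to operationalize the IIV index and the regression models described above is sketched below. The formula shown (the within-person standard deviation across standardized test scores) is a widely used definition and may differ in detail from the study's established formula; the variable names are hypothetical.

```python
# Hypothetical sketch: IIV as the within-person SD of standardized test
# scores, plus a regression of a functioning outcome on IIV and covariates.
# The study's exact formula and variable names may differ.
import pandas as pd
import statsmodels.formula.api as smf

def add_iiv(df: pd.DataFrame, test_cols: list) -> pd.DataFrame:
    z = (df[test_cols] - df[test_cols].mean()) / df[test_cols].std(ddof=0)
    out = df.copy()
    out["global_composite"] = z.mean(axis=1)   # mean of standardized scores
    out["iiv"] = z.std(axis=1, ddof=1)         # within-person SD across tests
    return out

def predict_functioning(df: pd.DataFrame, outcome: str):
    """e.g., outcome = 'pcs_baseline' (hypothetical column name)."""
    return smf.ols(f"{outcome} ~ iiv + global_composite + age + education",
                   data=df).fit()
```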
Results:
The IIV index and the global composite score did not explain a significant proportion of the variance in baseline and long-term social and role functioning (p > .05). However, IIV was a significant predictor of baseline (B = -3.84, p = .021) and long-term (B = -3.58, p = .037) PCS scores, but not MCS scores (p > .05). The global composite score did not predict baseline or long-term PCS scores.
Conclusions:
IIV significantly predicted baseline and long-term physical functioning, but not mental functioning or social and role functioning, suggesting that IIV may be a sensitive marker of limitations in everyday functioning due to physical health problems in precariously housed individuals. Critically, the present study is the first to show that IIV, beyond traditional neuropsychological measures, may be a useful index for predicting poor long-term health-related outcomes in this population.
A field study was conducted twice at on-farm sites in Elizabeth, MS, in 2010–11 and 2011–12, and twice in 2012–13 at Mississippi State University’s Delta Research and Extension Center in Stoneville, MS, to evaluate glyphosate-resistant (GR) Italian ryegrass control and crop response to fall treatments followed by postemergence herbicide treatments in winter and/or spring. Italian ryegrass was controlled ≥92% by S-metolachlor and 61% by tillage at 77 d after fall treatments (DA-FT). The S-metolachlor fall treatment provided 33% greater control than the clethodim winter treatment at 21 d after winter treatments (DA-WT). The tillage fall treatment followed by (fb) a clethodim winter treatment fb a paraquat spring treatment provided control (93%) similar to that of treatments containing an S-metolachlor fall treatment fb a winter or spring herbicide treatment (≥93%) at 24 d after spring treatments (DA-ST). The greatest soybean and corn density and yield were also observed following programs containing the S-metolachlor fall treatment. Sequential postemergence herbicide treatments were not required to increase corn and soybean density and yield when S-metolachlor was used as a fall treatment. Growers have the best opportunity to maximize GR Italian ryegrass control when S-metolachlor fb a winter or spring herbicide treatment is used.
Substantial progress has been made in the standardization of nomenclature for paediatric and congenital cardiac care. In 1936, Maude Abbott published her Atlas of Congenital Cardiac Disease, which was the first formal attempt to classify congenital heart disease. The International Paediatric and Congenital Cardiac Code (IPCCC) is now utilized worldwide and has most recently become the paediatric and congenital cardiac component of the Eleventh Revision of the International Classification of Diseases (ICD-11). The most recent publication of the IPCCC was in 2017. This manuscript provides an updated 2021 version of the IPCCC.
The International Society for Nomenclature of Paediatric and Congenital Heart Disease (ISNPCHD), in collaboration with the World Health Organization (WHO), developed the paediatric and congenital cardiac nomenclature that is now within the Eleventh Revision of the International Classification of Diseases (ICD-11). This unification of the IPCCC and ICD-11, the IPCCC ICD-11 Nomenclature, marks the first time that the clinical and administrative nomenclatures for paediatric and congenital cardiac care have been harmonized. The congenital cardiac component of ICD-11 was expanded from 29 congenital cardiac codes in ICD-9 and 73 in ICD-10 to 318 codes submitted by ISNPCHD through 2018 for incorporation into ICD-11. After these 318 terms were incorporated into ICD-11 in 2018, the WHO ICD-11 team added an additional 49 terms, some of which are acceptable legacy terms from ICD-10, while others provide greater granularity than the ISNPCHD originally thought acceptable. Thus, the total number of paediatric and congenital cardiac terms in ICD-11 is 367. In this manuscript, we describe and review the terminology, hierarchy, and definitions of the IPCCC ICD-11 Nomenclature. This article therefore presents a global system of nomenclature for paediatric and congenital cardiac care that unifies clinical and administrative nomenclature.
The members of ISNPCHD realize that the nomenclature published in this manuscript will continue to evolve. The version of the IPCCC that was published in 2017 has evolved and changed, and it is now replaced by this 2021 version. In the future, ISNPCHD will again publish updated versions of IPCCC, as IPCCC continues to evolve.
The application of paraquat mixtures with residual herbicides before planting rice is a common treatment in Mississippi, and rice in proximity is susceptible to off-target movement of these applications. Four concurrent studies were conducted in Stoneville, MS, to characterize rice performance following exposure to a sublethal rate of paraquat, metribuzin, fomesafen, or cloransulam-methyl at different application timings. Herbicides were applied to rice at the spiking to one-leaf (VEPOST), two- to three-leaf (EPOST), three- to four-leaf (MPOST), 7 d postflood (PFLD), and panicle differentiation (PD) growth stages. Regardless of application timing, rice injury following exposure to paraquat was ≥45%. Delays in maturity following paraquat increased by 0.3 d per day of application timing from emergence through PD. Dry weight, rough rice yield, panicle density, and germination were reduced by 18.7 g, 131.5 kg ha−1, 5.6 panicles m−2, and 0.3%, respectively, per day of application timing from emergence through PD following paraquat. By 28 d after treatment (DAT), metribuzin injured rice 3% to 6%, and that injury did not translate into a yield reduction. Regardless of application timing, rice injury following fomesafen application ranged from 2% to 5% at 28 DAT. Rice exposed to cloransulam-methyl EPOST exhibited the greatest root and foliar injury at 21 and 28 DAT, respectively. Additionally, when rice was exposed to cloransulam-methyl EPOST, yield was reduced to 6,540 kg ha−1 compared with 7,850 kg ha−1 for nontreated rice. Rice yield was negatively affected when paraquat was applied any time after rice emergence; however, applications of paraquat at early reproductive growth stages reduced rough rice yield and seed germination the most. Application timing is crucial in determining the severity of rice injury: early-season injury following paraquat application had less effect on yield than injury at later stages. Additionally, fields devoted to seed rice production are at risk of reduced seed germination if exposed to paraquat during early reproductive growth stages.
Objective:
To identify factors that increase the microbial load in the operating room (OR) and to recommend solutions to minimize the effect of these factors.
Design:
Observation and sampling study.
Setting:
Academic health center, public hospitals.
Methods:
We analyzed 4 videotaped orthopedic surgeries (15 hours in total) for door openings and staff movement. The data were translated into a script denoting a representative frequency and location of movements for each OR team member. These activities were then simulated for 30 minutes per trial in a functional operating room by researchers re-enacting the OR staff-member roles, while bacteria and fungi were collected using settle plates. To test hypotheses on the influence of activity on microbial load, an experimental design was created in which each factor was tested at higher- and lower-than-normal activity settings for a 30-minute period. These trials were conducted in 2 phases.
Results:
The frequency of door opening did not independently affect the microbial load in the OR. However, a longer duration and greater width of door opening led to increased microbial load in the OR. Increased staff movement also increased the microbial load. There was a significantly higher microbial load on the floor than at waist level.
Conclusions:
Movement of staff and the duration and width of door opening clearly affect the OR microbial load. However, further investigation is needed to determine how the number of staff affects the microbial load and how to reduce the microbial load at the surgical table.
The Fontan Outcomes Network was created to improve outcomes for children and adults with single-ventricle CHD living with Fontan circulation. The network's mission is to optimise longevity and quality of life by improving physical health, neurodevelopmental outcomes, resilience, and emotional health for these individuals and their families. This manuscript describes the systematic design of this new learning health network, including the initial steps in the development of a national, lifespan registry and pilot testing of data collection forms at 10 congenital heart centres.
In 2017, we surveyed long-term care facilities in Pennsylvania regarding antimicrobial stewardship and infection prevention and control (IPC) practices. Among 244 responding facilities, 93% had IPC programs and 47% had antimicrobial stewardship programs. There was significant variation in practices across facilities, and a number of program implementation challenges were identified.
Objectives: Studies of neurocognitively elite older adults, termed SuperAgers, have identified clinical predictors and neurobiological indicators of resilience against age-related neurocognitive decline. Despite rising numbers of older persons living with HIV (PLWH), SuperAging (SA) in PLWH remains undefined. We aimed to establish neuropsychological criteria for SA in PLWH and to examine clinically relevant correlates of SA. Methods: 734 PLWH and 123 HIV-uninfected participants between 50 and 64 years of age underwent neuropsychological and neuromedical evaluations. SA was defined as demographically corrected (i.e., sex, race/ethnicity, education) global neurocognitive performance within the normal range for 25-year-olds. Remaining participants were labeled cognitively normal (CN) or impaired (CI) based on actual age. Chi-square and analysis of variance tests examined HIV group differences in neurocognitive status and demographics. Within PLWH, neurocognitive status groups were compared on HIV disease characteristics, medical comorbidities, and everyday functioning. Multinomial logistic regression explored independent predictors of neurocognitive status. Results: Neurocognitive status rates and demographic characteristics differed between PLWH (SA=17%; CN=38%; CI=45%) and HIV-uninfected participants (SA=35%; CN=55%; CI=11%). In PLWH, neurocognitive groups were comparable on demographic and HIV disease characteristics. Younger age, higher verbal IQ, absence of diabetes, fewer depressive symptoms, and lifetime cannabis use disorder increased the likelihood of SA. SA participants reported greater independence in everyday functioning, higher rates of employment, and better health-related quality of life than non-SA participants. Conclusions: Despite the combined neurological risk of aging and HIV, youthful neurocognitive performance is possible for older PLWH. SA relates to improved real-world functioning and may be better explained by cognitive reserve and maintenance of cardiometabolic and mental health than by HIV disease severity. Future research investigating biomarker and lifestyle (e.g., physical activity) correlates of SA may help identify modifiable neuroprotective factors against HIV-related neurobiological aging. (JINS, 2019, 25, 507–519)
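A minimal sketch of the exploratory multinomial model described in the Methods is shown below, assuming a tidy data frame with one row per PLWH participant. The outcome coding and predictor names are hypothetical, not the study's variables.

```python
# Hypothetical sketch: multinomial logistic regression of neurocognitive
# status (SA / CN / CI) on candidate predictors. Column names and coding
# (e.g., diabetes as 0/1) are assumptions, not the study's variables.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def status_model(df: pd.DataFrame, predictors: list) -> pd.DataFrame:
    """df: one row per participant with a 'status' column coded SA/CN/CI
    and numeric predictor columns listed in `predictors`."""
    y = df["status"].astype("category").cat.codes     # 0/1/2 outcome
    X = sm.add_constant(df[predictors])
    fit = sm.MNLogit(y, X, missing="drop").fit(disp=False)
    return np.exp(fit.params)   # relative-risk ratios vs. the reference level

# Example with hypothetical predictor names:
# rrr = status_model(plwh, ["age", "verbal_iq", "diabetes",
#                           "depressive_symptoms", "cannabis_use_disorder"])
```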
The development of laser wakefield accelerators (LWFA) over the past several years has led to interest in very compact sources of X-ray radiation, such as “table-top” free electron lasers. However, conventional undulators based on permanent magnets imply large overall system sizes. In this work, we assess the possibility of using novel mini-undulators in conjunction with a LWFA so that the dimensions of the undulator become comparable with the acceleration distances of LWFA experiments (i.e., centimeters). A prototype undulator produced by laser machining of permanent magnets is described for this application, and the emission characteristics and limitations of such a system are determined. Preliminary electron propagation and X-ray emission measurements are taken with a LWFA electron beam at the University of Michigan.
In an effort to optimize patient outcomes, considerable attention is being devoted to identifying patient characteristics, such as reward and punishment sensitivity, that are associated with major depressive disorder (MDD) and its responsiveness to treatment. In the current study, we extend this work by evaluating whether early change in these sensitivities is associated with response to antidepressant treatment for MDD.
Methods
Participants included 210 patients with MDD who were treated with 8 weeks of escitalopram and 112 healthy comparison participants. Of the original 210 patients, 90 non-responders received adjunctive aripiprazole for an additional 8 weeks. Symptoms of depression and anhedonia were assessed at the beginning of treatment and 8 weeks later in both samples. Reward and punishment sensitivity were assessed using the BIS/BAS scales measured at the initiation of treatment and 2 weeks later.
Results
Individuals with MDD exhibited higher punishment sensitivity and lower reward sensitivity compared with healthy comparison participants. Change in reward sensitivity during the first 2 weeks of treatment was associated with improved depressive symptoms and anhedonia following 8 weeks of treatment with escitalopram. Similarly, improvement in reward responsiveness during the first 2 weeks of adjunctive therapy with aripiprazole was associated with fewer symptoms of depression at post-treatment.
Conclusions
Findings highlight the predictive utility of early change in reward sensitivity during antidepressant treatment for major depression. In a clinical setting, a lack of change in early reward processing may signal a need to modify a patient's treatment plan with alternative or augmented treatment approaches.
Cerebrovascular reactivity monitoring has been used to identify the lower limit of pressure autoregulation in adult patients with brain injury. We hypothesise that impaired cerebrovascular reactivity and time spent below the lower limit of autoregulation during cardiopulmonary bypass will result in hypoperfusion injuries to the brain detectable by elevation in serum glial fibrillary acidic protein level.
Methods
We designed a multicentre observational pilot study combining concurrent cerebrovascular reactivity and biomarker monitoring during cardiopulmonary bypass. All children undergoing bypass for CHD were eligible. Autoregulation was monitored with the haemoglobin volume index, a moving correlation coefficient between the mean arterial blood pressure and the near-infrared spectroscopy-based trend of cerebral blood volume. Both haemoglobin volume index and glial fibrillary acidic protein data were analysed by phases of bypass. Each patient’s autoregulation curve was analysed to identify the lower limit of autoregulation and optimal arterial blood pressure.
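For intuition, a haemoglobin volume index of this kind can be sketched as a rolling correlation between time-aligned mean arterial pressure and a NIRS-derived total-haemoglobin (blood volume) signal. The window length and signal names below are illustrative assumptions, not the monitoring system actually used in the study.

```python
# Hypothetical sketch: a moving correlation between mean arterial pressure
# (MAP) and a NIRS-derived cerebral blood volume trend, in the spirit of the
# haemoglobin volume index. Window length and sampling are placeholders.
import pandas as pd

def hvx(map_series: pd.Series, thb_series: pd.Series, window: int = 30) -> pd.Series:
    """Rolling Pearson correlation over `window` samples of time-aligned
    MAP and total-haemoglobin signals."""
    df = pd.DataFrame({"map": map_series, "thb": thb_series})
    return df["map"].rolling(window).corr(df["thb"])

# Values near +1 suggest pressure-passive (impaired) reactivity in a window;
# values near zero or below suggest intact autoregulation.
```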
Results
A total of 57 children had autoregulation and biomarker data for all phases of bypass. The mean baseline haemoglobin volume index was 0.084. The haemoglobin volume index increased as pressure was lowered, with 82% of patients demonstrating a lower limit of autoregulation (41±9 mmHg) and 100% demonstrating an optimal blood pressure (48±11 mmHg). There was a significant association between an individual’s peak autoregulation and biomarker values (p=0.01).
Conclusions
Individual, dynamic non-invasive cerebrovascular reactivity monitoring demonstrated transient periods of impairment related to possible silent brain injury. The association between an impaired autoregulation burden and elevation in the serum brain biomarker may identify brain perfusion risk that could result in injury.
Hosts face mortality from parasitic and environmental stressors, but interactions of parasitism with other stressors are not well understood, particularly for long-lived hosts. We monitored survival of flour beetles (Tribolium confusum) in a longitudinal design incorporating cestode (Hymenolepis diminuta) infection, starvation and exposure to the pesticide diatomaceous earth (DE). We found that cestode cysticercoids exhibited increasing morphological damage and decreasing ability to excyst over time, but were never eliminated from the host. In the presence of even mild environmental stressors, host lifespan was reduced sufficiently that extensive degradation of cysticercoids was never realized. Median host lifespan was 200 days in the absence of stressors, and 3–197 days with parasitism, starvation and/or DE. Early survival of parasitized hosts was higher relative to controls in the presence of intermediate concentrations of DE, but was reduced under all other conditions tested. Parasitism increased host mortality in the presence of other stressors at times when parasitism alone did not cause mortality, consistent with an interpretation of synergy. Environmental stressors modified the parasite numbers needed to reveal intensity-dependent host mortality, but only rarely masked intensity dependence. The longitudinal approach produced observations that would have been overlooked or misinterpreted if survival had been monitored at only a single time point.
Objectives: The present study examined differences in neurocognitive outcomes between non-Hispanic Black and White stroke survivors using the NIH Toolbox-Cognition Battery (NIHTB-CB) and investigated the roles of healthcare variables in explaining racial differences in neurocognitive outcomes post-stroke. Methods: One hundred seventy adults (91 Black; 79 White) who participated in a multisite study were included (age: M=56.4, SD=12.6; education: M=13.7, SD=2.5; 50% male; years post-stroke: 1–18; stroke type: 72% ischemic, 28% hemorrhagic). Neurocognitive function was assessed with the NIHTB-CB, using demographically corrected norms. Participants completed measures of socio-demographic characteristics, health literacy, and healthcare use and access. Stroke severity was assessed with the Modified Rankin Scale. Results: An independent-samples t test indicated that Blacks showed more neurocognitive impairment (NIHTB-CB Fluid Composite T-score: M=37.63, SD=11.67) than Whites (Fluid T-score: M=42.59, SD=11.54; p=.006). This difference remained significant after adjusting for reading level (NIHTB-CB Oral Reading) and when stratified by stroke severity. Blacks also scored lower on health literacy, differed in insurance type, and reported less confidence in the doctors treating them. Multivariable models adjusting for reading level and injury severity showed that health literacy and insurance type were statistically significant predictors of the Fluid cognitive composite (p<.001 and p=.02, respectively) and significantly mediated racial differences in neurocognitive impairment. Conclusions: We replicated prior work showing that Blacks are at increased risk for poorer neurocognitive outcomes post-stroke than Whites. Health literacy and insurance type might be important modifiable factors influencing these differences. (JINS, 2017, 23, 640–652)
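To illustrate the mediation logic described in the Results, the sketch below estimates a simple product-of-coefficients indirect effect of race on the fluid composite through health literacy, with a percentile bootstrap. It is a hypothetical simplification: the column names, the 0/1 coding of race, and the covariate set are assumptions, not the study's models.

```python
# Hypothetical sketch: product-of-coefficients mediation of the race
# difference in the fluid cognitive composite through health literacy,
# with a percentile bootstrap CI. Columns (race coded 0/1, reading_level,
# stroke_severity, health_literacy, fluid_composite) are placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def indirect_effect(df: pd.DataFrame) -> float:
    a = smf.ols("health_literacy ~ race + reading_level + stroke_severity",
                data=df).fit().params["race"]
    b = smf.ols("fluid_composite ~ health_literacy + race + reading_level"
                " + stroke_severity", data=df).fit().params["health_literacy"]
    return a * b

def bootstrap_ci(df: pd.DataFrame, n_boot: int = 2000, seed: int = 0):
    rng = np.random.default_rng(seed)
    estimates = [indirect_effect(df.sample(len(df), replace=True,
                                           random_state=int(rng.integers(2**31 - 1))))
                 for _ in range(n_boot)]
    return np.percentile(estimates, [2.5, 97.5])
```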