Blast injuries can occur by a multitude of mechanisms, including improvised explosive devices (IEDs), military munitions, and accidental detonation of chemical or petroleum stores. These injuries disproportionately affect people in low- and middle-income countries (LMICs), where there are often fewer resources to manage complex injuries and mass-casualty events.
Study Objective:
The aim of this systematic review is to describe the literature on the acute facility-based management of blast injuries in LMICs to aid hospitals and organizations preparing to respond to conflict- and non-conflict-related blast events.
Methods:
Ovid MEDLINE, Scopus, Global Index Medicus, Web of Science, CINAHL, and Cochrane databases were searched to identify relevant citations published from January 1998 through July 2024. This systematic review was conducted in adherence with PRISMA guidelines. Data were extracted and analyzed descriptively. A meta-analysis calculated the pooled proportions of mortality, hospital admission, intensive care unit (ICU) admission, intubation and mechanical ventilation, and emergency surgery.
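The abstract does not specify the pooling model; as a minimal sketch, assuming a random-effects (DerSimonian-Laird) meta-analysis of logit-transformed proportions, pooled estimates and confidence intervals of this kind could be computed along the following lines (the study counts are illustrative placeholders, not data from the review):

```python
# Hedged sketch: random-effects pooling of proportions on the logit scale
# (DerSimonian-Laird). Event counts below are illustrative only.
import numpy as np
from scipy.stats import norm

def pooled_proportion(events, totals, alpha=0.05):
    events, totals = np.asarray(events, float), np.asarray(totals, float)
    # Logit transform with a small continuity correction for 0%/100% studies.
    p = (events + 0.5) / (totals + 1.0)
    yi = np.log(p / (1 - p))                                     # study-level logits
    vi = 1.0 / (events + 0.5) + 1.0 / (totals - events + 0.5)    # logit variances
    wi = 1.0 / vi                                                # fixed-effect weights
    ybar = np.sum(wi * yi) / np.sum(wi)
    Q = np.sum(wi * (yi - ybar) ** 2)                            # heterogeneity statistic
    C = np.sum(wi) - np.sum(wi ** 2) / np.sum(wi)
    tau2 = max(0.0, (Q - (len(yi) - 1)) / C)                     # between-study variance
    wstar = 1.0 / (vi + tau2)                                    # random-effects weights
    mu = np.sum(wstar * yi) / np.sum(wstar)
    se = np.sqrt(1.0 / np.sum(wstar))
    z = norm.ppf(1 - alpha / 2)
    inv_logit = lambda x: 1.0 / (1.0 + np.exp(-x))
    return inv_logit(mu), inv_logit(mu - z * se), inv_logit(mu + z * se)

# Illustrative call with made-up study counts (events, totals per study):
est, lo, hi = pooled_proportion([12, 30, 7], [120, 310, 95])
print(f"pooled proportion {est:.3f} (95% CI {lo:.3f} to {hi:.3f})")
```

The logit scale keeps the pooled estimate and its confidence limits inside the 0-1 range, which matters for outcomes such as ICU admission that are rare in some studies.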
Results:
Reviewers screened 3,731 titles and abstracts and 173 full texts. Seventy-five articles from 22 countries were included for analysis. Only 14.7% of included articles came from low-income countries (LICs). Sixty percent of studies were conducted in tertiary care hospitals. The mean proportion of patients who were admitted was 52.1% (95% CI, 37.6% to 66.4%). Among all in-patients, 20.0% (95% CI, 12.4% to 28.8%) were admitted to an ICU. Overall, 38.0% (95% CI, 25.6% to 51.3%) of in-patients underwent emergency surgery and 13.8% (95% CI, 2.3% to 31.5%) were intubated. Pooled in-patient mortality was 9.5% (95% CI, 4.6% to 15.6%) and total hospital mortality (including emergency department [ED] mortality) was 7.4% (95% CI, 3.4% to 12.4%). There were no significant differences in mortality when stratified by country income level or hospital setting.
Conclusion:
Findings from this systematic review can be used to guide preparedness and resource allocation for acute care facilities. Pooled proportions for mortality and other outcomes described in the meta-analysis offer a metric by which future researchers can assess the impact of blast events. Under-representation of LICs and non-tertiary care medical facilities and significant heterogeneity in data reporting among published studies limited the analysis.
Education is known to impact neuropsychological test performance, and self-reported years of education is often used in stratifying normative data. However, this variable does not always reflect education quality, particularly among underrepresented populations, and may overestimate cognitive impairment in individuals with low education quality. This cross-sectional study evaluated relative contributions of years of education and reading level to several verbally mediated assessments to improve interpretation of neuropsychological performance.
Participants and Methods:
Data were obtained from the Vanderbilt Memory and Aging Project. Cognitively unimpaired participants (n=175, 72±7 years, 59% male, 87% Non-Hispanic White, 16±2 years of education) completed a comprehensive neuropsychological protocol. Stepwise linear regressions were calculated using years of education and Wide Range Achievement Test (WRAT)-3 Reading subtest scores as predictors and letter fluency (FAS, CFL), category fluency (Vegetable and Animal Naming), the Boston Naming Test (BNT), and the California Verbal Learning Test (CVLT)-II as outcomes to assess the increase in variance explained by education quality. Models covaried for age and sex. The False Discovery Rate (FDR) based on the Benjamini-Hochberg procedure (Benjamini & Hochberg, 1995) was used to correct for multiple comparisons.
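As a minimal sketch of this incremental-variance approach, assuming "increase in variance explained" refers to the change in R² when WRAT-3 is added to a covariate-plus-education model, the comparison and FDR correction could look like the following; the file and column names are hypothetical:

```python
# Hedged sketch: incremental R-squared for reading level beyond education,
# with Benjamini-Hochberg FDR correction across outcomes.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.multitest import multipletests

def incremental_r2(df, outcome):
    base = smf.ols(f"{outcome} ~ age + sex + education", data=df).fit()
    full = smf.ols(f"{outcome} ~ age + sex + education + wrat3", data=df).fit()
    return full.rsquared - base.rsquared, full.pvalues["wrat3"]

outcomes = ["fas", "cfl", "animal_naming", "bnt", "cvlt_recall"]  # hypothetical columns
df = pd.read_csv("vmap_cognition.csv")                            # hypothetical file
delta_r2, pvals = zip(*(incremental_r2(df, y) for y in outcomes))
rejected, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
for y, d, p, keep in zip(outcomes, delta_r2, p_adj, rejected):
    print(f"{y}: delta R^2 = {d:.3f}, FDR-adjusted p = {p:.3f}, significant = {keep}")
```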
Results:
The mean WRAT-3 score was 51±4 (range: 37-57), indicating a post-high school reading level. Education and WRAT-3 scores were moderately correlated (r=0.36, p<0.01). Both WRAT-3 and years of education independently predicted letter fluency (WRAT-3 p<0.001; education p<0.02), category fluency (WRAT-3 p<0.001; education p<0.05), and CVLT-II performance (WRAT-3 p-values<0.005; education p-values<0.02) in single-predictor models. On the BNT, WRAT-3 (p<0.001), but not education (p=0.06), predicted performance in single-predictor models. In combined models with both WRAT-3 and education, WRAT-3 scores remained a significant predictor of FAS (WRAT-3 b=1.21, p<0.001; education b=0.006, p=0.99) and CFL performance (WRAT-3 b=1.02, p<0.001; education b=0.51, p=0.14). Both WRAT-3 (b=0.21, p=0.01) and years of education (b=0.35, p=0.03) predicted Animal Naming, while WRAT-3 (b=0.16, p=0.008), but not years of education (p=0.37), predicted Vegetable Naming. WRAT-3 was a significant predictor of BNT performance (b=0.21, p<0.001), but years of education was not (p=0.65). WRAT-3 predicted CVLT-II learning (b=0.32, p=0.04), immediate recall (b=0.16, p=0.005), and delayed recall performances (b=0.15, p=0.005), while education did not (p-values>0.14). All significant results persisted after FDR correction. WRAT-3 scores explained additional variance beyond the covariates and education alone for FAS (ΔR²=18%), CFL (ΔR²=13%), Animal Naming and Vegetable Naming (ΔR²=3%), BNT (ΔR²=18%), and CVLT-II learning (ΔR²=2%), immediate recall (ΔR²=4%), and delayed recall (ΔR²=3%).
Conclusions:
Reading level was more strongly associated with performance on several verbally mediated neuropsychological measures than years of education. For all measures, the addition of reading level significantly increased the amount of variance explained by the model compared with covariates and education alone, which aligns with existing research. However, most of this past work examined individuals with lower levels of education quality. Because our cohort was highly educated and at the upper end of the reading spectrum, our results suggest that reading level is important to consider even for more highly educated individuals. Therefore, reading level is a critical variable to consider when interpreting verbally mediated neuropsychological measures for individuals across the educational spectrum.
Neuropsychological (NP) tests are increasingly computerized, which automates test administration and scoring. These innovations are well suited for use in resource-limited settings, such as low- to middle-income countries (LMICs), which often lack specialized testing resources (e.g., trained staff, forms, norms, equipment). Despite this, there is a dearth of research on their acceptability and usability, which could affect performance, particularly in LMICs with varying levels of access to computer technology. NeuroScreen is a tablet-based battery of tests assessing learning, memory, working memory, processing speed, executive functions, and motor speed. This study evaluated the acceptability and usability of NeuroScreen among two groups of LMIC adolescents with and without HIV from Cape Town, South Africa, and Kampala, Uganda.
Participants and Methods:
Adolescents in Cape Town (n=131) and Kampala (n=80) completed NeuroScreen along with questions about their use and ownership of, and comfort with, computer technology, as well as their experiences completing NeuroScreen. Participants rated their comfort with and ease of use of computers, tablets, smartphones, and NeuroScreen on a Likert-type scale from (1) Very Easy/Very Comfortable to (6) Very Difficult/Very Uncomfortable. For analyses, responses from Somewhat Easy/Comfortable to Very Easy/Comfortable were collapsed to codify comfort and ease. Descriptive statistics assessed technology use and experiences of using the NeuroScreen tool. A qualitative question asked how participants would feel about receiving NeuroScreen routinely in the future; responses were coded as positive (e.g., “I would enjoy it”), negative, or neutral. Chi-square tests assessed group differences.
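A minimal sketch of this coding-and-comparison step, assuming hypothetical column names and a binary collapse of ratings 1-3 as easy/comfortable, might look like this:

```python
# Hedged sketch: collapse Likert ratings (1 = Very Easy ... 6 = Very Difficult)
# into a binary "comfortable" indicator and test for site differences.
import pandas as pd
from scipy.stats import chi2_contingency

df = pd.read_csv("neuroscreen_usability.csv")       # hypothetical file
df["comfortable"] = df["tablet_rating"] <= 3         # ratings 1-3 collapsed as comfortable
table = pd.crosstab(df["site"], df["comfortable"])   # e.g., Cape Town vs Kampala
chi2, p, dof, _ = chi2_contingency(table)
print(table)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.3f}")
```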
Results:
South African adolescents were 15.42 years old on average, 50.3% male, and 49% HIV-positive. Ugandan adolescents were 15.64 years old on average, 50.6% male, and 54% HIV-positive. South African participants were more likely than Ugandan participants to have ever used a computer (71% vs. 49%; p<.005) or tablet (58% vs. 40%; p<.05), whereas smartphone use was similar (94% vs. 87%). South African participants reported higher rates of comfort using a computer (86% vs. 46%; p<.001) and smartphone (96% vs. 88%; p<.05) compared to Ugandan participants. Ugandan adolescents rated using NeuroScreen as easier than South African adolescents did (96% vs. 87%; p<.05). Regarding within-sample differences by HIV status, Ugandan participants with HIV were less likely to have used a computer than participants without HIV (70% vs. 57%; p<.05). The Finger Tapping test was rated as the easiest by both South African (73%) and Ugandan (64%) participants. Trail Making was rated as the most difficult test among Ugandan participants (37%); 75% of South African participants reported no tasks as difficult, with Finger Tapping most often rated as the most difficult (8%). When asked about completing NeuroScreen at routine doctor’s visits, most South Africans (85%) and Ugandans (72%) responded positively.
Conclusions:
This study found that, even with low prior tablet use and varying levels of comfort in using technology, South African and Ugandan adolescents rated NeuroScreen with high acceptability and usability. These data suggest that scaling up NeuroScreen in LMICs, where technology use might be limited, may be appropriate for adolescent populations. Further research should examine prior experience and comfort with tablets as predictors of NeuroScreen test performance.
Novel blood-based biomarkers for Alzheimer's disease (AD) could transform AD diagnosis in the community; however, their interpretation in individuals with medical comorbidities is not well understood. Specifically, kidney function has been shown to influence plasma levels of various brain proteins. This study sought to evaluate the effect of one common marker of kidney function (estimated glomerular filtration rate [eGFR]) on the association between various blood-based biomarkers of AD/neurodegeneration (glial fibrillary acidic protein [GFAP], neurofilament light [NfL], amyloid-β42 [Aβ42], total tau) and established CSF biomarkers of AD (Aβ42/40 ratio, tau, phosphorylated tau [p-tau]), neuroimaging markers of AD (AD-signature region cortical thickness), and episodic memory performance.
Participants and Methods:
Vanderbilt Memory and Aging Project participants (n=329, 73±7 years, 40% mild cognitive impairment, 41% female) completed a fasting venous blood draw, fasting lumbar puncture, 3T brain MRI, and neuropsychological assessment at study entry and at 18-month, 3-year, and 5-year follow-up visits. Plasma GFAP, Aβ42, total tau, and NfL were quantified on the Quanterix single molecule array platform. CSF biomarkers for Aβ were quantified using Meso Scale Discovery immunoassays, and tau and p-tau were quantified using INNOTEST immunoassays. AD-signature region atrophy was calculated by summing bilateral cortical thickness measurements captured on T1-weighted brain MRI from regions shown to distinguish individuals with AD from those with normal cognition. Episodic memory functioning was measured using a previously developed composite score. Linear mixed-effects regression models related predictors to each outcome, adjusting for age, sex, education, race/ethnicity, apolipoprotein E-ε4 status, and cognitive status. Models were repeated with a blood-based biomarker x eGFR x time interaction term, with follow-up models stratified by chronic kidney disease (CKD) staging (stage 1/no CKD: eGFR >90 mL/min/1.73 m²; stage 2: eGFR 60-89 mL/min/1.73 m²; stage 3: eGFR 44-59 mL/min/1.73 m²; no participants had higher than stage 3).
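A minimal sketch of one of the longitudinal models described here, assuming hypothetical variable names in a long-format dataset (GFAP as the plasma predictor and the memory composite as the outcome), could be specified as follows:

```python
# Hedged sketch: linear mixed-effects model with a biomarker x eGFR x time
# interaction and a random intercept/slope per participant. Column names are
# hypothetical; covariates mirror those listed in the abstract.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("vmap_biomarkers_long.csv")        # hypothetical long-format file
model = smf.mixedlm(
    "memory_composite ~ gfap * egfr * time + age + sex + education"
    " + race_ethnicity + apoe4 + cognitive_status",
    data=df,
    groups=df["subject_id"],
    re_formula="~time",                              # random intercept and slope
)
fit = model.fit(reml=True)
print(fit.summary())

# Illustrative stratified follow-up for one CKD stratum (stage 3 by eGFR):
stage3 = df[df["egfr"].between(44, 59)]
```

Stratified follow-up models of this kind would simply refit the same formula within each CKD stratum, as in the last line above.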
Results:
Cross-sectionally, GFAP was associated with all outcomes (p-values<0.005) and NfL was associated with memory and AD-signature region cortical thickness (p-values<0.05). In predictor x eGFR interaction models, GFAP and NfL interacted with eGFR on AD-signature cortical thickness (p-values<0.004), and Aβ42 interacted with eGFR on tau, p-tau, and memory (p-values<0.03). Tau did not interact with eGFR. Stratified models across predictors showed that associations were stronger in individuals with better renal functioning, and no significant associations were found in individuals with stage 3 CKD. Longitudinally, higher GFAP and NfL were associated with memory decline (p-values<0.001). In predictor x eGFR x time interaction models, GFAP and NfL interacted with eGFR on p-tau (p-values<0.04). Other models were nonsignificant. Stratified models showed that associations were significant only in individuals with no CKD/stage 1 CKD and were not significant in participants with stage 2 or 3 CKD.
Conclusions:
In this community-based sample of older adults free of dementia, plasma biomarkers of AD/neurodegeneration were associated with AD-related clinical outcomes both cross-sectionally and longitudinally; however, these associations were modified by renal functioning with no associations in individuals with stage 3 CKD. These results highlight the value of blood-based biomarkers in individuals with healthy renal functioning and suggest caution in interpreting these biomarkers in individuals with mild to moderate CKD.
To summarize presentations and discussions from the 2022 trans-agency workshop titled “Overlapping science in radiation and sulfur mustard (SM) exposures of skin and lung: Consideration of models, mechanisms, organ systems, and medical countermeasures.”
Methods:
The summary of topics includes: (1) an overview of the radiation and chemical countermeasure development programs and missions; (2) regulatory and industry perspectives for drugs and devices; (3) pathophysiology of skin and lung following radiation or SM exposure; (4) mechanisms of action/targets and biomarkers of injury; and (5) animal models that simulate anticipated clinical responses.
Results:
There are striking similarities between injuries caused by radiation and SM exposures. Primary outcomes from both types of exposure include acute injuries, while late complications comprise chronic inflammation, oxidative stress, and vascular dysfunction, which can culminate in fibrosis in both skin and lung. This workshop brought together academic and industrial researchers, medical practitioners, US Government program officials, and regulators to discuss lung- and skin-specific animal models and biomarkers, novel pathways of injury and recovery, and paths to licensure for products to address radiation or SM injuries.
Conclusions:
Regular communications between the radiological and chemical injury research communities can enhance the state-of-the-science, provide a unique perspective on novel therapeutic strategies, and improve overall US Government emergency preparedness.
Heavy episodic drinking (HED) is a major public health concern, and youth who engage in HED are at increased risk for alcohol-related problems that continue into adulthood. Importantly, there is heterogeneity in the onset and course of adolescent HED, as youth exhibit different trajectories of initiation and progression into heavy drinking. Much of what is known about the etiology of adolescent HED and alcohol-related problems that persist into adulthood comes from studies of predominantly White, middle-class youth. Because alcohol use and related problems vary by race/ethnicity and socioeconomic status, it is unclear whether previous findings are relevant for understanding developmental antecedents and distal consequences of adolescent HED for minoritized individuals. In the current study, we utilize a developmental psychopathology perspective to fill this gap in the literature. Using a racially and economically diverse cohort followed from adolescence well into adulthood, we apply group-based trajectory modeling (GBTM) to identify patterns of involvement in HED from age 14 to 17 years. We then investigate developmental antecedents of GBTM class membership, and alcohol-related distal outcomes in adulthood (∼ age 31 years) associated with GBTM class membership. Results highlight the importance of adolescent alcohol use in predicting future alcohol use in adulthood.
This 17-year prospective study applied a social-development lens to the challenge of identifying long-term predictors of adult depressive symptoms. A diverse community sample of 171 individuals was repeatedly assessed from age 13 to age 30 using self-, parent-, and peer-report methods. As hypothesized, competence in establishing close friendships beginning in adolescence had a substantial long-term predictive relation to adult depressive symptoms at ages 27–30, even after accounting for prior depressive, anxiety, and externalizing symptoms. Intervening relationship difficulties at ages 23–26 were identified as part of pathways to depressive symptoms in the late twenties. Somewhat distinct paths by gender were also identified, but in all cases were consistent with an overall role of relationship difficulties in predicting long-term depressive symptoms. Implications both for early identification of risk as well as for potential preventive interventions are discussed.
Only a limited number of patients with major depressive disorder (MDD) respond to a first course of antidepressant medication (ADM). We investigated the feasibility of creating a baseline model to identify which patients beginning ADM treatment in the US Veterans Health Administration (VHA) would be among these responders.
Methods
A 2018–2020 national sample of n = 660 VHA patients receiving ADM treatment for MDD completed an extensive baseline self-report assessment near the beginning of treatment and a 3-month self-report follow-up assessment. Using baseline self-report data along with administrative and geospatial data, an ensemble machine learning method was used to develop a model for 3-month treatment response defined by the Quick Inventory of Depression Symptomatology Self-Report and a modified Sheehan Disability Scale. The model was developed in a 70% training sample and tested in the remaining 30% test sample.
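The abstract does not name the constituent learners in the ensemble; as a minimal sketch, assuming a stacked ensemble over tabular baseline features (file and column names are hypothetical stand-ins), the 70/30 development-and-test workflow could look like this:

```python
# Hedged sketch: train an ensemble on a 70% split and evaluate AUC on the
# held-out 30% test sample. Learners and feature names are illustrative.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import StackingClassifier, RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

df = pd.read_csv("vha_mdd_baseline.csv")                 # hypothetical file
X, y = df.drop(columns=["responded"]), df["responded"]   # "responded" = 3-month response flag
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.30, stratify=y, random_state=0)

ensemble = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=500, random_state=0)),
        ("gbm", GradientBoostingClassifier(random_state=0)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,
)
ensemble.fit(X_tr, y_tr)
pred = ensemble.predict_proba(X_te)[:, 1]
print(f"test-sample AUC = {roc_auc_score(y_te, pred):.3f}")
```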
Results
In total, 35.7% of patients responded to treatment. The prediction model had an area under the ROC curve (s.e.) of 0.66 (0.04) in the test sample. A strong gradient in probability (s.e.) of treatment response was found across three subsamples of the test sample using training sample thresholds for high [45.6% (5.5)], intermediate [34.5% (7.6)], and low [11.1% (4.9)] probabilities of response. Baseline symptom severity, comorbidity, treatment characteristics (expectations, history, and aspects of current treatment), and protective/resilience factors were the most important predictors.
Conclusions
Although these results are promising, parallel models to predict response to alternative treatments based on data collected before initiating treatment would be needed for such models to help guide treatment selection.
Fewer than half of patients with major depressive disorder (MDD) respond to psychotherapy. Pre-emptively informing patients of their likelihood of responding could be useful as part of a patient-centered treatment decision-support plan.
Methods
This prospective observational study examined a national sample of 807 patients beginning psychotherapy for MDD at the Veterans Health Administration. Patients completed a self-report survey at baseline and 3-months follow-up (data collected 2018–2020). We developed a machine learning (ML) model to predict psychotherapy response at 3 months using baseline survey, administrative, and geospatial variables in a 70% training sample. Model performance was then evaluated in the 30% test sample.
Results
32.0% of patients responded to treatment after 3 months. The best ML model had an AUC (SE) of 0.652 (0.038) in the test sample. Among the one-third of patients ranked by the model as most likely to respond, 50.0% in the test sample responded to psychotherapy. In comparison, among the remaining two-thirds of patients, <25% responded to psychotherapy. The model selected 43 predictors, of which nearly all were self-report variables.
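As a minimal sketch of the tercile comparison reported above (ranking test-sample patients by predicted probability of response and comparing observed response rates in the top third versus the rest), with placeholder arrays standing in for model scores and observed outcomes:

```python
# Hedged sketch: compare observed response rates in the top-ranked third of
# test-sample patients versus the remainder. Arrays are placeholders.
import numpy as np

rng = np.random.default_rng(0)
pred = rng.uniform(size=240)                     # placeholder predicted probabilities
observed = rng.binomial(1, 0.32, size=240)       # placeholder 0/1 response outcomes

order = np.argsort(-pred)                        # highest predicted probability first
cut = len(order) // 3
top, rest = order[:cut], order[cut:]
print(f"response rate, top third: {observed[top].mean():.1%}")
print(f"response rate, remaining two-thirds: {observed[rest].mean():.1%}")
```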
Conclusions
Patients with MDD could pre-emptively be informed of their likelihood of responding to psychotherapy using a prediction tool based on self-report data. This tool could meaningfully help patients and providers in shared decision-making, although parallel information about the likelihood of responding to alternative treatments would be needed to inform decision-making across multiple treatments.
New care paradigms are required to enable remote life-saving interventions (RLSIs) in extreme environments such as disaster settings. Informatics may assist through just-in-time expert remote telementoring (RTM) or video modelling (VM). Currently, RTM relies on real-time communication that may not be reliable in some locations, especially if communications fail. Neither technique has been extensively developed, however, and both may need to be performed by inexperienced providers to save lives. A pilot comparison was thus conducted.
Methods:
Procedure-naïve Search-and-Rescue Technicians (SAR-Techs), randomly allocated to RTM or VM, performed a tube thoracostomy (TT) on a surgical simulator. The VM group watched a pre-prepared video illustrating TT immediately beforehand, while the RTM group was remotely guided by an expert in real time. Standard outcomes included success, safety, and tube security for the TT procedure.
Results:
There were no differences in experience between the groups. Of the 13 SAR-Techs randomized to VM, 12/13 (92%) placed the TT successfully, safely, and secured it properly, while 100% (11/11) of the TTs placed by the RTM group were successful, safe, and secure. Statistically, there was no difference (P = 1.000) between RTM and VM in safety, success, or tube security. However, with VM, one subject cut himself, one did not puncture the pleura, and one had barely adequate placement. There were no such issues in the mentored group. Total time was significantly shorter using RTM (P = .02). However, if time-to-watch was discounted, VM was quicker (P < .001).
Conclusions:
This randomized evaluation revealed that both paradigms have strengths. If VM can be utilized during “travel time,” it is quicker but does not facilitate troubleshooting. On the other hand, RTM had no errors in TT placement and facilitated guidance and remediation by the mentor, presumably avoiding failure, increasing safety, and potentially providing psychological support. Ultimately, both techniques appear to have merit and may be complementary, justifying continued research into the human factors of performing RLSIs in the extreme environments likely to be encountered in natural and man-made disasters.
Resource-intensive interventions and education are susceptible to a lack of long-term sustainability and regression to the mean. The respiratory culture nudge changed reporting to “Commensal Respiratory Flora only: No S. aureus/MRSA or P. aeruginosa.” This study demonstrated a sustained reduction in broad-spectrum antibiotic duration, with the effect maintained 3 years after implementation.
BASF Corp. has developed p-hydroxyphenylpyruvate dioxygenase (HPPD) inhibitor–resistant cotton and soybean that will allow growers to use isoxaflutole in future weed management programs. In 2019 and 2020, a multi-state non-crop research project was conducted to examine weed control following isoxaflutole applied preemergence alone and with several tank-mix partners at high and low labeled rates. At 28 d after treatment (DAT), Palmer amaranth was controlled ≥95% at six of seven locations with isoxaflutole plus the high rate of diuron or fluridone. These same combinations provided the greatest control 42 DAT at four of seven locations. Where large crabgrass was present, isoxaflutole plus the high rate of diuron, fluridone, pendimethalin, or S-metolachlor or isoxaflutole plus the low rate of fluometuron controlled large crabgrass ≥95% in two of three locations 28 DAT. In two of three locations, isoxaflutole plus the high rate of pendimethalin or S-metolachlor improved large crabgrass control 42 DAT when compared to isoxaflutole alone. At 21 DAT, morningglory was controlled ≥95% at all locations with isoxaflutole plus the high rate of diuron and at three of four locations with isoxaflutole plus the high rate of fluometuron. At 42 DAT at all locations, isoxaflutole plus diuron or fluridone and isoxaflutole plus the high rate of fluometuron improved morningglory control compared to isoxaflutole alone. These results suggest that isoxaflutole applied preemergence alone or in tank mixture is efficacious on a number of cross-spectrum annual weeds in cotton, and extended weed control may be achieved when isoxaflutole is tank-mixed with several soil-residual herbicides.
Background:
Infection prevention and control (IPC) workflows are often retrospective and manual. New tools, however, have entered the field to facilitate rapid prospective monitoring of infections in hospitals. Although artificial intelligence (AI)–enabled platforms facilitate timely, on-demand integration of clinical data feeds with pathogen whole-genome sequencing (WGS), a standardized workflow to fully harness the power of such tools is lacking. We report a novel, evidence-based workflow that promotes quicker infection surveillance via AI-assisted clinical and WGS data analysis. The algorithm suggests clusters based on a combination of similar minimum inhibitory concentration (MIC) data, timing of sample collection, and shared location stays between patients. It helps to proactively guide IPC professionals during investigation of infectious outbreaks and surveillance of multidrug-resistant organisms and healthcare-acquired infections.
Methods:
Our team established a 1-year workgroup composed of IPC practitioners, clinical experts, and scientists in the field. We held weekly roundtables to study lessons learned in an ongoing surveillance effort at a tertiary care hospital—utilizing Philips IntelliSpace Epidemiology (ISEpi), an AI-powered system—to understand how such a tool can enhance practice. Based on real-time case discussions and evidence from the literature, a workflow guidance tool and checklist were codified.
Results:
In our workflow, data-informed clusters posed by ISEpi underwent triage and expert follow-up analysis to assess: (1) likelihood of transmission(s); (2) identity of potential vector(s); (3) need to request WGS; and (4) intervention(s) to be pursued, if warranted. In a representative sample (spanning October 17, 2019, to November 7, 2019) of 67 total isolates suggested for inclusion in 19 unique cluster investigations, we determined that 9 investigations merited follow-up. Collectively, these 9 investigations involved 21 patients and required 115 minutes to review in ISEpi and an additional 70 minutes of review outside of ISEpi. After review, 6 investigations were deemed unlikely to represent a transmission; the other 3 had potential to represent transmission for which interventions would be performed.
Conclusions:
This study offers an important framework for adapting existing infection control workflow strategies to leverage the utility of rapidly integrated clinical and WGS data. This workflow can also facilitate time-sensitive decisions regarding sequencing of specific pathogens given the preponderance of available clinical data supporting investigations. In this regard, our work sets a new standard of practice: precision infection prevention (PIP). Ongoing effort is aimed at development of AI-powered capabilities for enterprise-level quality and safety improvement initiatives.
Funding: Philips Healthcare provided support for this study.
Disclosures: Alan Doty and Juan Jose Carmona report salary from Philips Healthcare.
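The proprietary ISEpi clustering logic is not described in the abstract; purely as an illustration of the kind of rule that combines MIC similarity, collection timing, and shared location stays, a minimal sketch with hypothetical fields and thresholds might look like this:

```python
# Hedged sketch: flag patient pairs whose isolates share a susceptibility (MIC)
# profile, were collected within a time window, and whose ward stays overlapped.
# Illustration of the concept only, not the ISEpi algorithm; thresholds are assumptions.
from datetime import date
from itertools import combinations
from dataclasses import dataclass

@dataclass
class Isolate:
    patient: str
    mic_profile: tuple      # e.g., ("CIP:R", "MEM:S")
    collected: date
    ward_stay: tuple        # (ward, admit_date, discharge_date)

def stays_overlap(a, b):
    # Same ward and overlapping date ranges.
    return a[0] == b[0] and a[1] <= b[2] and b[1] <= a[2]

def candidate_clusters(isolates, window_days=14):
    pairs = []
    for x, y in combinations(isolates, 2):
        same_mic = x.mic_profile == y.mic_profile
        close_in_time = abs((x.collected - y.collected).days) <= window_days
        if same_mic and close_in_time and stays_overlap(x.ward_stay, y.ward_stay):
            pairs.append((x.patient, y.patient))
    return pairs

isolates = [
    Isolate("P1", ("CIP:R", "MEM:S"), date(2019, 10, 20), ("ICU-A", date(2019, 10, 15), date(2019, 10, 25))),
    Isolate("P2", ("CIP:R", "MEM:S"), date(2019, 10, 22), ("ICU-A", date(2019, 10, 18), date(2019, 10, 30))),
]
print(candidate_clusters(isolates))   # flagged pair would be triaged for expert review / WGS request
```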
OBJECTIVES/GOALS:
Previous research has shown acute and chronic alcohol effects on cardiac function, including elevated heart rate (HR) and lowered heart rate variability (HRV). This study aimed to examine the relationship between cardiac reactivity and subjective response following intravenous (IV) alcohol in non-dependent drinkers.
METHODS/STUDY POPULATION:
Non-dependent drinkers (N = 46, average age = 25.2 years) completed a human laboratory IV alcohol self-administration (IV-ASA) session. Subjective response to alcohol was assessed using the Drug Effects Questionnaire (DEQ) and Alcohol Urge Questionnaire (AUQ). Drinking behavior was assessed using the Alcohol Timeline Followback (TLFB) and Alcohol Use Disorders Identification Test (AUDIT). HR was recorded throughout the session using the Polar Pro heart rate monitor. HRV measures were calculated following the guidelines of the Task Force of the European Society of Cardiology and the North American Society of Pacing and Electrophysiology.
RESULTS/ANTICIPATED RESULTS:
Recent drinking history as measured by the AUDIT and TLFB did not differ significantly by sex. Heavier drinking measures (AUDIT and TLFB) were positively associated with HRV measures (all p-values < 0.02). Those who reported a greater increase in alcohol craving (AUQ score) and wanted more alcohol (DEQ) following an alcohol prime showed a greater change in HRV (p < 0.005). When examining HRV change from baseline throughout the priming session, there was a significant sex interaction for NN50 (p < 0.03) and a trend for pNN50 (p < 0.07).
DISCUSSION/SIGNIFICANCE OF IMPACT:
Acute IV alcohol alters cardiac reactivity measures in non-dependent drinkers. Future directions include examining the role of sex in HRV changes during alcohol consumption during IV-ASA. Understanding the effect of alcohol on cardiac reactivity and physiology may help characterize those at risk for alcohol use disorders.
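As a minimal sketch of the time-domain HRV metrics referenced above (per the Task Force guidelines), NN50, pNN50, SDNN, and RMSSD can be computed from a series of normal-to-normal (NN) intervals; the interval series below is illustrative:

```python
# Hedged sketch: time-domain HRV metrics from NN intervals in milliseconds.
import numpy as np

def time_domain_hrv(nn_ms):
    nn = np.asarray(nn_ms, float)
    diffs = np.diff(nn)
    nn50 = int(np.sum(np.abs(diffs) > 50))           # successive differences > 50 ms
    return {
        "SDNN":  float(np.std(nn, ddof=1)),           # overall variability
        "RMSSD": float(np.sqrt(np.mean(diffs ** 2))), # short-term variability
        "NN50":  nn50,
        "pNN50": 100.0 * nn50 / len(diffs),           # percentage of successive diffs > 50 ms
    }

# Illustrative NN intervals (ms):
print(time_domain_hrv([812, 790, 845, 860, 798, 805, 910, 870]))
```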
Assessing the impact of treatment from the patient perspective provides additional information about treatment efficacy in major depressive disorder (MDD) trials.
Objectives
Pooled data from three identically designed clinical trials showed that aripiprazole adjunctive to antidepressant therapy (ADT) was effective in treating MDD [1].
Methods
Patients who completed an 8-week prospective ADT phase with an inadequate response were randomized, double-blind, to 6 weeks of adjunctive treatment with aripiprazole or placebo. The Quality of Life Enjoyment and Satisfaction Questionnaire-Short Form (Q-LES-Q-SF) is a 16-item self-report measure of daily functioning, with higher scores indicating better satisfaction. Comparisons of mean change from baseline (Week 8) to Week 14 in Q-LES-Q-SF items and general subscores were performed using analysis of covariance (ANCOVA) with last observation carried forward (LOCF).
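A minimal sketch of the LOCF-plus-ANCOVA approach described here, assuming a wide-format dataset with hypothetical visit columns between Week 8 and Week 14, could look like this:

```python
# Hedged sketch: carry each patient's last observed score forward to Week 14,
# then model change from baseline with treatment as a factor and the baseline
# score as a covariate. Column names and visit schedule are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

wide = pd.read_csv("qlesq_scores_wide.csv")           # hypothetical: one row per patient
visit_cols = ["week8", "week10", "week12", "week14"]  # hypothetical visit columns
locf = wide[visit_cols].ffill(axis=1)["week14"]       # last observation carried forward
wide["change"] = locf - wide["week8"]                 # change from baseline (Week 8)

ancova = smf.ols("change ~ C(treatment) + week8", data=wide).fit()
print(ancova.summary())
```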
Results
There was significant improvement in the Q-LES-Q-SF Overall-General subscore (total of items 1-14 expressed as a percentage of the maximum possible score) in the aripiprazole-treatment group (9.49% [n=507]) vs placebo (5.71% [n=492]; p<0.001). Placebo was significantly higher than aripiprazole in Physical Ability (placebo 0.13 vs aripiprazole 0.02, p=0.020). Aripiprazole was significantly higher than placebo on all other items except Physical Health and Vision. Aripiprazole also produced significant increases in both the Satisfaction with Medication (Item 15) score (aripiprazole 0.36 vs placebo 0.20, p<0.01) and the Overall Satisfaction (Item 16) score (aripiprazole 0.61 vs placebo 0.35, p<0.001).
Conclusions
Results emphasize that assessment of patient functioning and quality of life may have utility both in clinical trials and in clinical practice [2].
To evaluate the efficacy of aripiprazole adjunctive antidepressant therapy (ADT) with regard to functioning in patients with major depressive disorder (MDD) who did not achieve an adequate response with standard ADT.
Methods
Pooled data were analyzed from three nearly identically designed randomized, double-blind, placebo-controlled trials: CN138-139, CN138-163, and CN138-165. These included patients with MDD, without psychotic features, who had failed at least one ADT in the present episode. Patients completing an 8-week prospective ADT phase with an inadequate response were randomized to 6 weeks of treatment with adjunctive aripiprazole (n=508) or placebo (n=494). Functioning was assessed using the Sheehan Disability Scale (SDS). Comparisons of mean change from baseline in total SDS score and in the family life, social life, and work/school domains were performed using ANCOVA.
Results
Adjunctive aripiprazole produced significant improvements in total SDS (-1.2 on an adjusted scale of 1-10, with 10=worst level of functioning/1=best) vs adjunctive placebo (-0.7, p< 0.001). Adjunctive aripiprazole produced significant changes in the family life domain (-1.4 for adjunctive aripiprazole vs -0.7 for adjunctive placebo, p< 0.001) and the social life domain (-1.4 for adjunctive aripiprazole vs -0.7 for adjunctive placebo, p< 0.001). No difference between groups was observed on the work/school domain (-0.8 for adjunctive aripiprazole and -0.6 for adjunctive placebo, p=0.34).
Conclusions
Adjunctive aripiprazole showed significant improvements in overall SDS scores, and family and social life domains. Less change was observed in the work/school domain. The results emphasize that assessment of patient functioning may have utility both in clinical trials and clinical practice.
This is a secondary analysis of clinical trial data collected in 12 European countries. We examined changes in weight and weight-related quality of life among community patients with schizophrenia treated with aripiprazole (ARI) versus standard of care (SOC), consisting of other marketed atypical antipsychotics (olanzapine, quetiapine, and risperidone).
Method
Five hundred fifty-five patients whose clinical symptoms were not optimally controlled and/or who experienced tolerability problems with their current medication were randomized to ARI (10–30 mg/day) or SOC. Weight and weight-related quality of life (using the IWQOL-Lite) were assessed at baseline and at weeks 8, 18, and 26. Random regression analysis across all time points using all available data was used to compare groups on changes in weight and IWQOL-Lite scores. Meaningful change from baseline was also assessed.
Results
Participants were 59.7% male, with a mean age of 38.5 years (SD 10.9) and mean baseline body mass index of 27.2 (SD 5.1). ARI participants lost an average of 1.7% of baseline weight in comparison to a gain of 2.1% by SOC participants (p < 0.0001) at 26 weeks. ARI participants experienced significantly greater increases in physical function, self-esteem, sexual life, and IWQOL-Lite total score. At 26 weeks, 20.7% of ARI participants experienced meaningful improvements in IWQOL-Lite score, versus 13.5% of SOC participants. A clinically meaningful change in weight was also associated with a meaningful change in quality of life (p < 0.001). A potential limitation of this study was its funding by a pharmaceutical company.
Conclusions
Compared to standard of care, patients with schizophrenia treated with aripiprazole experienced decreased weight and improved weight-related quality of life over 26 weeks. These changes were both statistically and clinically significant.
The parasite-stress theory of values or sociality is a recent, encompassing perspective in human social psychology and behavior. As an ecological and evolutionary theory of peoples’ cultural values/core preferences, it applies widely across many domains of human social life and human affairs. It is a general theory of human culture and sociality. Fundamental to the theory is the behavioral immune system. The human behavioral immune system includes: psychological traits and manifest behaviors for avoiding contact with infectious diseases; behaviors of in-group social preference, altruism, alliance, and conformity that manage the negative effects of infectious diseases; mate choice to increase personal and offspring defense against parasites; culinary behavior; and components of personality. The contagion-avoidance aspect of behavioral immunity is much more than out-group avoidance and dislike (xenophobia). It also includes the preference for the natal or local region (philopatry) and hence avoidance of foreignness in people and places where novel parasites may occur. The parasite-stress theory has produced a cornucopia of newly discovered patterns and informed and reinterpreted previously described patterns in the behavior of individuals and at the level of cultures/societies and regions. In novel ways, it informs and synthesizes knowledge of major features of the social lives and societal-level affairs of people, ranging from prejudice and egalitarianism to personality, economic patterns, core values, interpersonal and intergroup violence, governmental systems, gender relations, family structure, and the genesis and maintenance of cultural diversity across the world.
Treerow vegetation abundance and biodiversity were measured in response to six orchard floor management strategies in organic peach in northern Utah for three growing seasons. A total of 32 weed species were observed in the treerow; the most common were field bindweed, dandelion, perennial grasses (e.g., red fescue and ryegrass), clovers, and prickly lettuce. Weed biomass was two to five times greater in unmanaged (living mulch) than in manipulated treatments. Tillage greatly reduced weeds for approximately one month; however, vegetation rebounded midseason. Tillage selected for species adapted to disturbance, such as common purslane and field bindweed. Straw mulch provided equivalent weed suppression to tillage in the early season. Straw required annual reapplication with material costs, labor, and weed-seed contamination (e.g., volunteer grains and quackgrass) as disadvantages. Plastic fabric mulch reduced weeds the most, but had high initial costs and required seasonal maintenance. Weed biomass declined within seasons and across the three years of the study, likely due to tree canopy shading. Neither birdsfoot trefoil nor a perennial grass mixture planted in the alleyways influenced treerow weeds. Our results demonstrate several viable alternatives to tillage for weed management in treerows of organic peach orchards in the Intermountain West.