Sponge sticks (SS) and ESwabs are frequently used to detect multidrug-resistant organisms (MDROs) in the environment. Head-to-head comparisons of SS and ESwabs across recovery endpoints are limited.
Design:
We compared culture-based and non-culture-based MDRO recovery from (1) ESwabs, (2) cellulose-containing SS (CS), and (3) polyurethane-containing SS (PCS).
Methods:
Known quantities of each MDRO were pipetted onto a stainless-steel surface and swabbed with each method. Samples were processed and cultured, and colonies were counted. DNA was extracted from sample eluates, quantified, and subjected to metagenomic next-generation sequencing (mNGS). MDROs underwent whole genome sequencing (WGS). MDRO recovery from paired patient perirectal and PCS-collected environmental samples from clinical studies was determined.
Setting:
Laboratory experiment, tertiary medical center, and long-term acute care facility.
Results:
Culture-based recovery varied across MDRO taxa; it was highest for vancomycin-resistant Enterococcus and lowest for carbapenem-resistant Pseudomonas aeruginosa (CRPA). Culture-based recovery was significantly higher for SS than for ESwabs, except for CRPA, for which all methods performed poorly. Nucleic acid recovery varied across methods and MDRO taxa. Integrated WGS and mNGS analysis enabled detection of antimicrobial resistance genes, construction of high-quality metagenome-assembled genomes, and detection of MDRO genomes in environmental metagenomes across methods. In paired patient and environmental samples, environmental recovery of multidrug-resistant Pseudomonas aeruginosa (MDRP) was notably poor (0/123), despite detection of MDRP in patient samples (20/123).
Conclusions:
Our findings support the use of SS for the recovery of MDROs. Pitfalls of each method should be noted. Method selection should be driven by MDRO target and desired endpoint.
A promising approach to assessing research impact draws on the Translational Science Benefits Model (TSBM), an evaluation model that tracks the applied benefits of research in four domains: Clinical and Medical; Community and Public Health; Economic; and Policy and Legislative. However, standardized methods to verify TSBM benefit data, to aid in aggregating impact data within quantitative summaries, do not currently exist.
Methods:
A panel of 11 topic experts participated in a modified Delphi process for establishing content and face validity of a set of criteria for verifying qualitative TSBM data. Two survey rounds were completed by panelists, with a moderated discussion in between rounds to discuss criteria not reaching consensus. Criteria with panel consensus at or above 70% in the survey rounds were confirmed as validated.
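As a minimal illustration of the consensus rule described above, the following Python sketch tallies panelist votes and flags criteria reaching the 70% threshold; the criterion names and votes are invented for illustration, not data from the study.

```python
# Minimal sketch of the >=70% consensus rule described above.
# Criterion names and votes are hypothetical, for illustration only.
votes = {
    "benefit_linked_to_project": ["yes", "yes", "yes", "no", "yes", "yes",
                                  "yes", "yes", "no", "yes", "yes"],
    "documented_evidence_cited": ["yes", "no", "no", "yes", "yes", "no",
                                  "yes", "yes", "no", "yes", "no"],
}

THRESHOLD = 0.70  # panel consensus level used in the Delphi rounds

for criterion, panel in votes.items():
    agreement = panel.count("yes") / len(panel)
    status = "validated" if agreement >= THRESHOLD else "to discussion/Round 2"
    print(f"{criterion}: {agreement:.0%} agreement -> {status}")
```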
Results:
Criteria fell into 9 categories: Content Relevant, Project Related, Who, Reach, What, How, Novel, Documented Evidence, and When. The Delphi process yielded 197 total criteria across the 30 benefits characterized by the TSBM (range = 5–8 criteria per benefit).
Discussion:
The results of this Delphi process lay the foundation for developing a TSBM coding tool for evaluating and quantifying TSBM data. Standardizing this process will enable data aggregation, group analysis, and the comparison of research impact across contexts.
OBJECTIVES/GOALS: To develop and validate a tool to systematically identify benefits accruing to research within the Translational Science Benefits Model (TSBM) framework. We used a Delphi panel to reach consensus among a group of experts on criteria required for a clinical, community, economic, or policy benefit to be verified as coming from research. METHODS/STUDY POPULATION: A coding tool with proposed criteria to verify each of the 30 benefits was created at UCI to confirm the TSBM benefits resulting from funded research. We convened 11 experts from 8 CTSA hubs, who consisted of evaluators (faculty and staff) with experience using the TSBM. A web-based survey was used for Round 1, followed by a panel discussion of remaining unvalidated criteria, and a Round 2 survey as the final decision for inclusion of items in the tool. Response options for each criterion were “yes, required” or “no, not required”. Criteria that reached consensus (>70% agreement) were considered validated for inclusion in the final version. Panelist suggested criteria in Round 1 were also incorporated in the Round 2 survey for consideration by the experts. RESULTS/ANTICIPATED RESULTS: In the web-based survey for Round 1, all 11 experts participated and 92% of criteria reached the determined consensus level (N = 157). The remaining 8% of the criteria (N = 13) were discussed during the panel meeting. The discussion, in which 8 experts participated, was moderated by UCI and took place virtually via Zoom. All experts were sent a recording of the discussion and given the opportunity to post comments online about the remaining criteria before, during, and for a day after the discussion. Round 2 will include 50 newly proposed criteria from panelists and the 13 criteria that did not reach consensus in Round 1. Based on the results of Round 2, the criteria that reach consensus will be included in the final version of the coding tool that can be used across all TSBM benefits. DISCUSSION/SIGNIFICANCE: Using the Delphi Methodology, we will have a standardized set of criteria that may be applied to determine whether a TSBM benefit has resulted from a specific research project or program. This standardization will allow for aggregation and comparison of data across CTSA hubs and further multi-level evaluation of impact.
Perceived cognitive dysfunction is a common feature of late-life depression (LLD) that is associated with diminished quality of life and greater disability. Similar associations have been demonstrated in individuals with Hoarding Disorder. The degree to which hoarding behaviors (HB) are associated with greater perceived cognitive dysfunction and disability in individuals with concurrent LLD is not known.
Participants and Methods:
Participants with LLD (N=83) completed measures of hoarding symptom severity (Savings Inventory-Revised; SI-R) and were classified into two groups based on HB severity: LLD+HB who exhibited significant HB (SI-R ≥ 41, n = 25) and LLD with low HB (SI-R < 41, n = 58). Additional measures assessed depression severity (Hamilton Depression Rating Scale; HDRS), perceived cognitive difficulties (Everyday Cognition Scale; ECOG), and disability (World Health Organization Disability Assessment Scale [WHODAS]-II-Short). Given a non-normal distribution of ECOG and WHODAS-II scores, non-parametric Wilcoxon-Mann-Whitney tests were used to assess group differences in perceived cognitive dysfunction and disability. A regression model assessed the extent to which perceived cognitive dysfunction was associated with hoarding symptom severity measured continuously, covarying for age, education, gender, and depression severity. A separate regression model assessed the extent to which disability scores were associated with perceived cognitive dysfunction and HB severity, covarying for demographics and depression severity.
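The group comparison and covariate-adjusted regression described above might be sketched as follows; the input file and column names (sir_total, ecog, hdrs, etc.) are hypothetical placeholders rather than the study's actual dataset.

```python
# Sketch of the analyses described above: a Wilcoxon-Mann-Whitney group
# comparison and a covariate-adjusted linear regression. The input file and
# column names (ecog, sir_total, hdrs, ...) are hypothetical placeholders.
import pandas as pd
from scipy.stats import mannwhitneyu
import statsmodels.formula.api as smf

df = pd.read_csv("lld_hb_data.csv")  # hypothetical input file

# Non-parametric comparison of perceived cognitive dysfunction (ECOG)
# between high-HB (SI-R >= 41) and low-HB groups.
high_hb = df.loc[df["sir_total"] >= 41, "ecog"]
low_hb = df.loc[df["sir_total"] < 41, "ecog"]
stat, p = mannwhitneyu(high_hb, low_hb, alternative="two-sided")
print(f"Mann-Whitney W = {stat:.1f}, p = {p:.3f}")

# Regression of perceived cognitive dysfunction on continuous hoarding
# severity, adjusting for age, education, gender, and depression severity.
model = smf.ols("ecog ~ sir_total + age + education + gender + hdrs",
                data=df).fit()
print(model.summary())
```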
Results:
LLD+HB endorsed significantly greater perceived cognitive dysfunction (W = 1023, p = 0.003) and greater disability (W = 1006, p < 0.001) compared to the low-HB LLD group. Regression models accounting for demographic characteristics and depression severity revealed that greater HB severity was associated with greater perceived cognitive dysfunction (β = 0.009, t = 2.765, p = 0.007). Increased disability was associated with greater perceived cognitive dysfunction (β = 4.792, t(71) = 3.551, p = 0.0007), and the association with HB severity (β = 0.080, t(71) = 1.944, p = 0.056) approached significance after accounting for variance explained by depression severity and demographic covariates.
Conclusions:
Our results suggest that hoarding behaviors are associated with increased perceived cognitive dysfunction and greater disability in individuals with LLD. Screening for HB in individuals with LLD may help identify those at greater risk for poor cognitive and functional outcomes. Interventions that target HB and perceived cognitive difficulties may decrease risk for disability in LLD. However, longitudinal studies would be required to further evaluate these relationships.
Late Life Major Depressive Disorder (LLD) and Hoarding Disorder (HD) are common in older adults with prevalence estimates up to 29% and 7%, respectively. Both LLD and HD are characterized by executive dysfunction and disability. There is evidence of overlapping neurobiological dysfunction in LLD and HD suggesting potential for compounded executive dysfunction and disability in the context of comorbid HD and LLD. Yet, prevalence of HD in primary presenting LLD has not been examined and potential compounded impact on executive functioning, disability, and treatment response remains unknown. Thus, the present study aimed to determine the prevalence of co-occurring HD in primary presenting LLD and examine hoarding symptom severity as a contributor to executive dysfunction, disability, and response to treatment for LLD.
Participants and Methods:
Eighty-three adults ages 65-90 participating in a psychotherapy study for LLD completed measures of hoarding symptom severity (Savings Inventory-Revised: SI-R), executive functioning (WAIS-IV Digit Span, Letter-Number Sequencing, Coding; Stroop Interference; Trail Making Test-Part B; Letter Fluency), functional ability (World Health Organization Disability Assessment Schedule-II-Short), and depression severity (Hamilton Depression Rating Scale) at post-treatment. Pearson's Chi-squared tests evaluated group differences in cognitive and functional impairment rates and depression treatment response between participants with (LLD+HD) and without (LLD-only) clinically significant hoarding symptoms. Linear regressions examined the associations of hoarding symptom severity with executive function performance and functional ability, with participant age, years of education, gender, and concurrent depression severity included as covariates.
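A Pearson chi-squared comparison of impairment rates of the kind described above could look like the sketch below; the 2×2 counts are hypothetical placeholders, not the study data.

```python
# Sketch of a Pearson chi-squared test comparing impairment rates between
# the LLD+HD and LLD-only groups. Counts are hypothetical placeholders.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: LLD+HD, LLD-only; columns: impaired, not impaired
table = np.array([[22, 3],
                  [36, 22]])

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}")
```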
Results:
At post-treatment, 24.1% (20/83) of participants with LLD met criteria for clinically significant hoarding symptoms (SI-R ≥ 41). Relative to LLD-only, the LLD+HD group demonstrated greater impairment rates in Letter-Number Sequencing (χ2(1)=4.0, p=.045) and Stroop Interference (χ2(1)=4.8, p=.028). Greater hoarding symptom severity was associated with poorer executive functioning performance on Digit Span (t(71)=-2.4, β=-0.07, p=.019), Letter-Number Sequencing (t(70)=-2.1, β=-0.05, p=.044), and Letter Fluency (t(71)=-2.8, β=-0.24, p=.006). Rates of functional impairment were significantly higher in the LLD+HD group (88.0%) compared to the LLD-only group (62.3%) (χ2(1)=5.41, p=.020). Additionally, higher hoarding symptom severity was related to greater disability (t(72)=2.97, β=0.13, p=.004). Furthermore, depression treatment response rates were significantly lower in the LLD+HD group at 24.0% (6/25) compared to 48.3% (28/58) in the LLD-only group, χ2(1)=4.26, p=.039.
Conclusions:
The present study is among the first to report prevalence of clinically significant hoarding symptoms in primary presenting LLD. The findings of 24.1% co-occurrence of HD in primary presenting LLD and increased burden on executive functioning, disability, and depression treatment outcomes have important implications for intervention and prevention efforts. Hoarding symptoms are likely under-evaluated, and thus may be overlooked, in clinical settings where LLD is identified as the primary diagnosis. Taken together with results indicating poorer depression treatment response in LLD+HD, these findings underscore the need for increased screening of hoarding behaviors in LLD and tailored interventions for this LLD+HD group. Future work examining the course of hoarding symptomatology in LLD (e.g., onset age of hoarding behaviors) may provide insights into the mechanisms associated with greater executive dysfunction and disability.
The institutions (i.e., hubs) making up the National Institutes of Health (NIH)-funded network of Clinical and Translational Science Awards (CTSAs) share a mission to turn observations into interventions to improve public health. Recently, the focus of the CTSAs has turned increasingly from translational research (TR) to translational science (TS). The current NIH Funding Opportunity Announcement (PAR-21-293) for CTSAs stipulates that pilot studies funded through the CTSAs must be “focused on understanding a scientific or operational principle underlying a step of the translational process with the goal of developing generalizable solutions to accelerate translational research.” This new directive places Pilot Program administrators in the position of arbiters with the task of distinguishing between TR and TS projects. The purpose of this study was to explore the utility of a set of TS principles set forth by NCATS for distinguishing between TR and TS.
Methods:
Twelve CTSA hubs collaborated to generate a list of Translational Science Principles questions. Twenty-nine Pilot Program administrators used these questions to evaluate 26 CTSA-funded pilot studies.
Results:
Factor analysis yielded three factors: Generalizability/Efficiency, Disruptive Innovation, and Team Science. The Generalizability/Efficiency factor explained the largest amount of variance in the questions and significantly distinguished between projects that were verified as TS or TR by an expert panel (t = 6.92, p < .001).
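A rough sketch of this kind of analysis, extracting factors from the administrators' question ratings and testing whether factor scores separate expert-verified TS from TR projects, is shown below; the file, column names, and use of scikit-learn's maximum-likelihood factor analysis (rather than whatever extraction and rotation the authors used) are assumptions.

```python
# Sketch: extract three factors from question ratings, then test whether
# scores on the first factor differ between expert-verified TS and TR
# projects. File name, column layout, and labels are hypothetical.
import pandas as pd
from sklearn.decomposition import FactorAnalysis
from scipy.stats import ttest_ind

ratings = pd.read_csv("ts_principles_ratings.csv")  # one row per rated project
question_cols = [c for c in ratings.columns if c.startswith("Q")]

fa = FactorAnalysis(n_components=3, random_state=0)
scores = fa.fit_transform(ratings[question_cols])
ratings["generalizability_efficiency"] = scores[:, 0]

ts = ratings.loc[ratings["expert_label"] == "TS", "generalizability_efficiency"]
tr = ratings.loc[ratings["expert_label"] == "TR", "generalizability_efficiency"]
t, p = ttest_ind(ts, tr)
print(f"t = {t:.2f}, p = {p:.4f}")
```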
Conclusions:
The seven questions in this factor may be useful for informing deliberations regarding whether a study addresses a question that aligns with NCATS’ vision of TS.
Background: Environmental contamination is a major risk factor for multidrug-resistant organism (MDRO) exposure and transmission in the healthcare setting. Sponge-stick sampling methods have been developed and validated for MDRO epidemiological investigations, leading to their recommendation by public health agencies. However, similar bacteriological yields with more readily available methods that require less processing time or specialized equipment have also been reported. We compared the ability of 4 sampling methods to recover a variety of MDRO taxa from a simulated contaminated surface. Methods: We assessed the ability of (1) cotton swabs moistened with phosphate buffer solution (PBS), (2) e-swabs moistened with e-swab solution, (3) cellulose-containing sponge sticks (CSS), and (4) non–cellulose-containing sponge sticks (NCS) to recover extended-spectrum β-lactamase (ESBL)–producing Escherichia coli, carbapenem-resistant Pseudomonas aeruginosa (CRPA), carbapenem-resistant Acinetobacter baumannii (CRAB), methicillin-resistant Staphylococcus aureus (MRSA), vancomycin-resistant Enterococcus faecium (VRE), and a mixture that contained VRE, MRSA, and ESBL organisms. A solution of known bacterial inoculum (~10⁵ CFU/mL) was made for each MDRO. Then, 1 mL solution was pipetted onto a stainless-steel surface (8 × 12 inch) in 5 µL dots and allowed to dry for 1 hour. All samples were collected by 1 individual to minimize variation in technique. Sponge sticks were expressed in PBS containing 0.02% Tween 80 using a stomacher, were centrifuged, and were then resuspended in PBS. Cotton and e-swabs were spun in a vortexer. Then, 1 mL of fluid from each method was plated to selective and nonselective media in duplicate and incubated at 35°C for 24 hours (MRSA plates, 48 hours) (Fig. 1). CFU per square inch and percentage recovery were calculated. Results: Table 1 shows the CFU per square inch and percentage recovery for each sampling method–MDRO taxa combination. The percentage recovery varied across MDRO taxa. Across all methods, the lowest rate of recovery was for CRPA and the highest was for VRE. The percentage recovery was higher for the sponge-stick (CSS and NCS) methods than for the swab (cotton and e-swab) methods across all MDRO taxa (Table 1 and Fig. 2).
Conclusions: These findings support the preferential use of sponge sticks for the recovery of MDROs from the healthcare environment, despite the additional processing and equipment time needed for sponge sticks. Further studies are needed to assess the robustness of these findings in noncontrived specimens as well as the comparative effectiveness of different sampling methods for non–culture-based MDRO detection.
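The recovery endpoints used in the sampling comparisons above (CFU per square inch and percentage recovery relative to the applied inoculum) reduce to simple arithmetic; the sketch below illustrates the calculation with made-up plate counts and an assumed eluate volume.

```python
# Sketch of the recovery calculations described above. Plate counts,
# dilution factors, and eluate volume are hypothetical placeholders.
SURFACE_AREA_IN2 = 8 * 12          # 8 x 12 inch stainless-steel surface
INOCULUM_CFU = 1e5                 # ~10^5 CFU applied in 1 mL

def recovery(colony_count, dilution_factor, eluate_volume_ml, plated_volume_ml=1.0):
    """Return (CFU per square inch, percentage recovery) for one sample."""
    total_cfu = colony_count * dilution_factor * (eluate_volume_ml / plated_volume_ml)
    cfu_per_in2 = total_cfu / SURFACE_AREA_IN2
    pct_recovery = 100.0 * total_cfu / INOCULUM_CFU
    return cfu_per_in2, pct_recovery

# Example: 180 colonies counted from a 1:10 dilution of a 10 mL eluate
print(recovery(colony_count=180, dilution_factor=10, eluate_volume_ml=10))
```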
OBJECTIVES/GOALS: The Translational Science Benefits Model (TSBM), developed at Washington University in St. Louis, was used to create a survey to collect group-level data on the real-world impacts of research. It was used with two cohorts of CTSA-supported pilot studies to compare the benefits of campus-community partnerships to campus-only projects. METHODS/STUDY POPULATION: Investigators from two funding streams were surveyed: a campus-based cohort (n=31), and a campus-community partnership cohort (n=6). All studies were related to COVID-19. The Translational Benefits Survey collected quantitative and qualitative data for each of the 30 TSBM benefits, in 4 benefit categories: clinical, community, economic and policy. Text provided by investigators to support each reported benefit was evaluated by two coders through a process that required coder consensus to verify a benefit as realized. Verified benefits were aggregated for each cohort, and the percentage of projects per cohort with realized clinical, community, economic and policy benefits were calculated. RESULTS/ANTICIPATED RESULTS: Campus-community partnerships did not realize any clinical benefits, whereas 26% of campus-based projects realized at least one clinical benefit. In contrast, campus-community partnerships were more likely to realize community health benefits (17% vs 10% of campus projects) and economic benefits (17% vs 13% of campus projects). We identified a substantial amount of self-reported benefits (64% across all categories) that were unable to be confirmed as realized using the provided text, which either described activities not relevant to the selected benefit, or lacked critical details needed to verify that the benefit was realized. DISCUSSION/SIGNIFICANCE: This project demonstrates that the TSBM can be utilized to collect group-level data and to compare cohorts’real-world benefits. It also illuminates the need to improve the process for verifying self-reported benefits. Sharing data on these real-world impacts has the potential to convey the strengths of translational science to the public.
Group Name: The Emory COVID-19 Quality and Clinical Research Collaborative
Background: Patients hospitalized with COVID-19 are at risk of secondary infections—10%–33% develop bacterial pneumonia and 2%–6% develop bloodstream infection (BSI). We conducted a retrospective cohort study to identify the prevalence, microbiology, and outcomes of secondary pneumonias and BSIs in patients hospitalized with COVID-19. Methods: Patients aged ≥18 years with a positive SARS-CoV-2 real-time polymerase chain reaction assay admitted to 4 academic hospitals in Atlanta, Georgia, between February 15 and May 16, 2020, were included. We extracted electronic medical record data through June 16, 2020. Microbiology tests were performed according to standard protocols. Possible ventilator-associated pneumonia (PVAP) was defined according to Centers for Disease Control and Prevention (CDC) criteria. We assessed in-hospital mortality, comparing patients with and without infections using the χ2 test. SAS University Edition software was used for data analyses. Results: In total, 774 patients were included (median age, 62 years; 49.7% female; 66.6% black). In total, 335 patients (43.3%) required intensive care unit (ICU) admission, 238 (30.7%) required mechanical ventilation, and 120 (15.5%) died. Among 238 intubated patients, 65 (27.3%) had a positive respiratory culture, including 15 with multiple potential pathogens, for a total of 84 potential pathogens. The most common organisms were Staphylococcus aureus (29 of 84; 34.5%), Pseudomonas aeruginosa (16 of 84; 19.0%), and Klebsiella spp (14 of 84; 16.7%). Mortality did not differ between intubated patients with and without a positive respiratory culture (41.5% vs 35.3%; P = .37). Also, 5 patients (2.1%) had a CDC-defined PVAP (1.7 PVAPs per 1,000 ventilator days); none of them died. Among 536 (69.3%) nonintubated patients, 2 (0.4%) had a positive Legionella urine antigen and 1 had a positive respiratory culture (for S. aureus). Of 774 patients, 36 (4.7%) had BSI, including 5 with polymicrobial BSI (42 isolates total). Most BSIs (24 of 36; 66.7%) had ICU onset. The most common organisms were S. aureus (7 of 42; 16.7%), Candida spp (7 of 42; 16.7%), and coagulase-negative staphylococci (5 of 42; 11.9%); 12 (28.6%) were gram-negative. The most common source was central-line–associated BSI (17 of 36; 47.2%), followed by skin (6 of 36; 16.7%), lungs (5 of 36; 13.9%), and urine (4 of 36; 11.1%). Mortality was 50% in patients with BSI versus 13.8% without (p < 0.0001). Conclusions: In a large cohort of patients hospitalized with COVID-19, secondary infections were rare: 2% bacterial pneumonia and 5% BSI. The risk factors for these infections (intubation and central lines, respectively) and causative pathogens reflect healthcare delivery and not a COVID-19–specific effect. Clinicians should adhere to standard best practices for preventing and empirically treating secondary infections in patients hospitalized with COVID-19.
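The device-associated infection rate quoted above (1.7 PVAPs per 1,000 ventilator days) follows the standard events-per-1,000-device-days formula; in the sketch below the ventilator-day denominator is an assumed value back-calculated from the reported rate, not a figure taken from the study.

```python
# Sketch of the standard device-associated infection rate calculation
# (events per 1,000 device days). The ventilator-day total is an assumed
# value back-calculated from the reported rate, not a study figure.
def rate_per_1000_device_days(events, device_days):
    return 1000 * events / device_days

print(rate_per_1000_device_days(events=5, device_days=2941))  # ~1.7
```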
Background: A team of infectious diseases physicians, infectious diseases pharmacists, clinical laboratorians, and researchers collaborated to assess the management of lower respiratory tract infections (LRTIs). In 1 sample from our institution, 96.1% of pneumonia cases were prescribed antibiotics, compared to 85.0% in a comparison group. A collaborative effort led to the development of a protocol for procalcitonin (PCT)-guided antibiotic prescribing that was approved by several hospital committees, including the Antimicrobial Stewardship Committee and the Healthcare Pharmacy & Therapeutics Committee in December 2020. The aim of this analysis was to develop baseline information on PCT ordering and antibiotic prescribing patterns in LRTIs. Methods: We evaluated all adult inpatients (March–September 2019 and 2020) with a primary diagnosis of LRTI who received at least 1 antibiotic. Two cohorts were established to observe any potential differences in the 2 most recent years prior to adoption of the PCT protocol. Data (eg, demographics, specific diagnosis, length of stay, antimicrobial therapy and duration, PCT labs) were obtained from the UK Center for Clinical and Translational Science, and the study was approved by the local IRB. The primary outcome of interest was antibiotic duration; secondary outcomes of interest were PCT orders, discharge antibiotic prescription, and inpatient length of stay. Results: In total, 432 patients (277 in 2019 and 155 in 2020) were included in this analysis. The average patient age was 61.2 years (SD, ±13.7); 47.7% were female; and 86.1% were white. Most patients were primarily diagnosed with pneumonia (58.8%), followed by COPD with complication (40.5%). In-hospital mortality was 3.5%. A minority of patients (29.2%) had any PCT orders; among them, most (84.1%) had only 1 PCT level measured. The median length of hospital stay was 4 days (IQR, 2–6), and the median duration of antibiotic therapy was 4 days (IQR, 3–6). Conclusions: PCT is utilized in only a minority of LRTI cases at our institution, mostly as a single measurement. The development and implementation of PCT-guided therapy could help optimize antibiotic usage in patients with LRTIs.
Background: Historically, metronidazole was first-line therapy for Clostridioides difficile infection (CDI). In February 2018, the Infectious Diseases Society of America (IDSA) and Society for Healthcare Epidemiology of America (SHEA) updated clinical practice guidelines for CDI. The new guidelines recommend oral vancomycin or fidaxomicin for treatment of initial episode of CDI in adults. We examined the changes in treatment of CDI during 2018 across all types of healthcare settings in metropolitan Atlanta. Methods: Cases were identified through the Georgia Emerging Infections program (funded by the Centers for Disease Control and Prevention), which conducts active population-based surveillance in an 8-county area including Atlanta, Georgia (population, 4,126,399). An incident case was a resident of the catchment area with a positive C. difficile toxin test and no additional positive test in the previous 8 weeks. Recurrent CDI was defined as >1 incident CDI episode in 1 year. Clinical and treatment data were abstracted on a random 33% sample of adult (>17 years) cases. Definitive treatment categories were defined as the single antibiotic agent, metronidazole or vancomycin, used to complete a course. We examined the effect of time of infection, location of treatment, and number of CDI episodes on the use of metronidazole only. Results: We analyzed treatment information for 831 adult sampled cases. Overall, cases were treated at 29 hospitals (568 cases), 4 nursing homes (6 cases), and 101 outpatient providers (257 cases). The mean age was 60 (IQR, 34–86), and 111 (13.4%) had recurrent infection. Moreover, ∼28% of first-incident CDI episodes, 8% of second episodes, and 6% of third episodes were treated with metronidazole only. Compared to facility-based providers, outpatient providers were more likely to treat initial CDI episodes with metronidazole only (44% vs 21%; relative risk [RR], 2.1; 95% CI, 1.7–2.7). Treatment changed over time from 56% metronidazole only in January to 10% in December (Fig. 1). First-incident cases in the first quarter of 2018 were more likely to be treated with metronidazole only compared to those in the fourth quarter (RR, 2.76; 95% CI, 1.91–3.97). Conclusions: Preferential use of vancomycin for initial CDI episodes increased throughout 2018 but remained <100%. CDI episodes treated in the outpatient setting and nonrecurrent episodes were more likely to be treated with metronidazole only. Additional studies on persistent barriers to prescribing oral vancomycin, such as cost, are warranted.
Funding: None
Disclosures: Scott Fridkin reports that his spouse receives a consulting fee from the vaccine industry.
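The relative risk comparisons reported in the CDI abstract above (e.g., RR 2.1; 95% CI, 1.7–2.7 for outpatient vs facility-based providers) can be reproduced from a 2×2 table with the standard log-RR normal approximation; the counts below are invented to roughly match the reported percentages.

```python
# Sketch of a relative risk with a 95% CI from a 2x2 table, using the
# standard log(RR) normal approximation. Counts are invented placeholders
# chosen only to roughly match the percentages reported above.
import math

def relative_risk(a, n1, b, n2, z=1.96):
    """a/n1 = events/total in the exposed group, b/n2 = events/total in the comparison group."""
    p1, p2 = a / n1, b / n2
    rr = p1 / p2
    se_log_rr = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)
    lo = math.exp(math.log(rr) - z * se_log_rr)
    hi = math.exp(math.log(rr) + z * se_log_rr)
    return rr, lo, hi

# Example: outpatient vs facility-based providers treating initial episodes
# with metronidazole only (hypothetical counts, ~44% vs ~21%).
print(relative_risk(a=113, n1=257, b=119, n2=568))
```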
Global pork production has largely adopted on-farm biosecurity to minimize vectors of disease transmission and protect swine health. Feed and ingredients were not originally thought to be substantial vectors, but recent incidents have demonstrated their ability to harbor disease. The objective of this paper is to review the potential role of swine feed as a disease vector and describe biosecurity measures that have been evaluated as a way of maintaining swine health. Recent research has demonstrated that viruses such as porcine epidemic diarrhea virus and African Swine Fever Virus can survive conditions of transboundary shipment in soybean meal, lysine, and complete feed, and contaminated feed can cause animal illness. Recent research has focused on potential methods of preventing feed-based pathogens from infecting pigs, including prevention of entry to the feed system, mitigation by thermal processing, or decontamination by chemical additives. Strategies have been designed to understand the spread of pathogens throughout the feed manufacturing environment, including potential batch-to-batch carryover, thus reducing transmission risk. In summary, the focus on feed biosecurity in recent years is warranted, but additional research is needed to further understand the risk and identify cost-effective approaches to maintain feed biosecurity as a way of protecting swine health.
A growing body of research suggests that deficient emotional self-regulation (DESR) is common and morbid among attention-deficit/hyperactivity disorder (ADHD) patients. The main aim of the present study was to assess whether high and low levels of DESR in adult ADHD patients can be operationalized and whether they are clinically useful.
Methods.
A total of 441 newly referred 18- to 55-year-old adults of both sexes with Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) ADHD completed self-reported rating scales. We operationalized DESR using items from the Barkley Current Behavior Scale. We used receiver operating characteristic (ROC) curves to identify the optimal cut-off on the Barkley Emotional Dysregulation (ED) Scale to categorize patients as having high- versus low-level DESR, and we compared demographic and clinical characteristics between the groups.
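A ROC-based choice of cut-point such as the one described above might be sketched as follows, using Youden's J to pick the threshold; the input file, column names, and reference classification are assumptions, not the study's actual procedure.

```python
# Sketch of selecting an optimal cut-point on a rating scale with a ROC
# curve and Youden's J statistic. Column names and the reference outcome
# are hypothetical placeholders, not the study's actual definitions.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_curve

df = pd.read_csv("adhd_barkley.csv")          # hypothetical input file
y_true = df["high_desr_reference"]            # 0/1 reference classification
scores = df["barkley_ed_subscale_score"]      # continuous subscale score

fpr, tpr, thresholds = roc_curve(y_true, scores)
youden_j = tpr - fpr
best = thresholds[np.argmax(youden_j)]
print(f"Optimal cut-point by Youden's J: {best}")
```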
Results.
We averaged the optimal Barkley ED Scale cut-points from the ROC curve analyses across all subscales and categorized ADHD patients as having high- (N = 191) or low-level (N = 250) DESR (total Barkley ED Scale score ≥8 or <8, respectively). Those with high-level DESR had significantly more severe symptoms of ADHD, executive dysfunction, autistic traits, levels of psychopathology, and worse quality of life compared with those with low-level DESR. There were no major differences in outcomes among medicated and unmedicated patients.
Conclusions.
High levels of DESR are common in adults with ADHD and when present represent a burdensome source of added morbidity and disability worthy of further clinical and scientific attention.
The delegates at the Montgomery convention selected Jefferson Davis as Confederate president in part because they expected war and believed Davis would make a good commander in chief. An 1828 West Point graduate and seven-year veteran of the regular US Army, Davis had served as colonel of a Mississippi volunteer regiment in the Mexican–American War, gaining accolades for his role at the battles of Monterrey and Buena Vista. During the 1850s he served as secretary of war under President Franklin Pierce and later as chairman of the Senate’s Committee on Military Affairs. There was scarcely anyone in America with better qualifications for leading a nation in wartime.
Dietary phosphorus concentration greatly affects pig’s growth performance, environmental impact and diet cost. A total of 1080 pigs (initially 5.9 ± 1.08 kg) from three commercial research rooms were used to determine the effects of increasing standardized total tract digestible (STTD) P concentrations in diets without and with phytase on growth performance and percentage bone ash. Pens (10 pigs/pen, 9 pens/treatment) were balanced for equal weights and randomly allotted to 12 treatments. Treatments were arranged in two dose titrations (without or with 2000 units of phytase) with six levels of STTD P each. The STTD P levels were expressed as a percentage of NRC (2012) requirement estimates (% of NRC; 0.45 and 0.40% for phases 1 and 2, respectively) and were: 80%, 90%, 100%, 110%, 125% and 140% of NRC in diets without phytase and 100%, 110%, 125%, 140%, 155% and 170% of NRC in diets with phytase. Diets were provided in three phases, with experimental diets fed during phases 1 (days 0 to 11) and 2 (days 11 to 25), followed by a common diet from days 25 to 46. On day 25, radius samples from one median-weight gilt per pen were collected for analysis of bone ash. During the treatment period, increasing STTD P from 80% to 140% of NRC in diets without phytase improved average daily gain (ADG; quadratic, P < 0.01), average daily feed intake (ADFI; quadratic, P < 0.05) and gain–feed ratio (G : F; linear, P < 0.01). Estimated STTD P requirement in diets without phytase was 117% and 91% of NRC for maximum ADG according to quadratic polynomial (QP) and broken-line linear (BLL) models, respectively, and was 102%, 119% and >140% of NRC for maximum G : F using BLL, broken-line quadratic and linear models, respectively. When diets contained phytase, increasing STTD P from 100% to 170% of NRC improved ADG (quadratic, P < 0.05) and G : F (linear, P < 0.01). Estimated STTD P requirement in diets containing phytase was 138% for maximum ADG (QP), and 147% (QP) and 116% (BLL) of NRC for maximum G : F. Increasing STTD P increased (linear, P < 0.01) the percentage bone ash regardless of phytase addition. When comparing diets containing the same STTD P levels, phytase increased (P < 0.01) ADG, ADFI and G : F. In summary, estimated STTD P requirements varied depending on the response criteria and statistical models and ranged from 91% to >140% of NRC (0.41% to >0.63% of phase 1 diet and 0.36% to >0.56% of phase 2 diet) in diets without phytase, and from 116% to >170% of NRC (0.52% to >0.77% of phase 1 diet and 0.46% to >0.68% of phase 2 diet) for diets containing phytase. Phytase exerted an extra-phosphoric effect on promoting pig’s growth and improved the P dose-responses for ADG and G : F.
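As a rough illustration of the quadratic polynomial (QP) dose–response approach used above to estimate the STTD P level giving maximum ADG, the sketch below fits a quadratic to invented treatment means and reports its vertex; the data are not from the study.

```python
# Sketch of a quadratic polynomial (QP) dose-response fit and the dose at
# which the fitted response is maximized. Treatment means are hypothetical.
import numpy as np

sttd_p = np.array([80, 90, 100, 110, 125, 140])   # % of NRC (2012)
adg = np.array([310, 330, 345, 352, 356, 354])    # g/day, invented values

# Fit ADG = b0 + b1*x + b2*x^2 and take the vertex -b1/(2*b2) as the
# estimated requirement for maximum ADG.
b2, b1, b0 = np.polyfit(sttd_p, adg, deg=2)
requirement = -b1 / (2 * b2)
print(f"Estimated STTD P for maximum ADG: {requirement:.0f}% of NRC")
```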
Environmental regulations as well as economic incentives have resulted in greater use of synthetic amino acids in swine diets. Tryptophan is typically the second limiting amino acid in corn-soybean meal-based diets. However, using corn-based co-products emphasizes the need to evaluate the pig’s response to increasing Trp concentrations. Therefore, the objective of these studies was to evaluate the dose–response to increasing standardized ileal digestible (SID) Trp : Lys on growth performance of growing-finishing gilts housed under large-scale commercial conditions. Dietary treatments consisted of SID Trp : Lys of 14.5%, 16.5%, 18.0%, 19.5%, 21.0%, 22.5% and 24.5%. The study was conducted in four experiments of 21 days of duration each, and used corn-soybean meal-based diets with 30% distillers dried grains with solubles. A total of 1166, 1099, 1132 and 975 gilts (PIC 337×1050, initially 29.9±2.0 kg, 55.5±4.8 kg, 71.2±3.4 kg and 106.2±3.1 kg BW, mean±SD) were used. Within each experiment, pens of gilts were blocked by BW and assigned to one of the seven dietary treatments, with six pens per treatment and 20 to 28 gilts/pen. First, generalized linear mixed models were fit to data from each experiment to characterize performance. Next, data were modeled across experiments by fitting competing linear and non-linear dose–response models to estimate SID Trp : Lys break points or maximums for performance. Competing models included broken-line linear (BLL), broken-line quadratic and quadratic polynomial (QP). For average daily gain (ADG), increasing the SID Trp : Lys increased growth rate in a quadratic manner (P<0.02) in all experiments except for Exp 2, for which the increase was linear (P<0.001). Increasing SID Trp : Lys increased (P<0.05) feed efficiency (G : F) quadratically in Exp 1, 3 and 4. For ADG, the QP was the best-fitting dose–response model and the estimated maximum mean ADG was obtained at a 23.5% (95% confidence interval (CI): [22.7, 24.3%]) SID Trp : Lys. For maximum G : F, the BLL dose–response models had the best fit and estimated the SID Trp : Lys minimum to maximize G : F at 16.9% (95% CI: [16.0, 17.8%]). Thus, the estimated SID Trp : Lys for 30 to 125 kg gilts ranged from a minimum of 16.9% for maximum G : F to 23.5% for maximum ADG.
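The broken-line linear (BLL) dose–response model used above to estimate the minimum SID Trp:Lys for maximum G:F can be sketched with a nonlinear least-squares fit; the treatment means below are invented for illustration.

```python
# Sketch of a broken-line linear (BLL) dose-response fit: the response rises
# linearly up to a break point and plateaus beyond it. Data are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

trp_lys = np.array([14.5, 16.5, 18.0, 19.5, 21.0, 22.5, 24.5])    # SID Trp:Lys, %
gf = np.array([0.330, 0.345, 0.352, 0.353, 0.352, 0.354, 0.353])  # G:F, invented

def broken_line(x, plateau, slope, break_point):
    # Below the break point the response falls short of the plateau by
    # slope * (break_point - x); at or above it, the response is the plateau.
    return plateau - slope * np.maximum(break_point - x, 0.0)

(plateau, slope, bp), _ = curve_fit(broken_line, trp_lys, gf,
                                    p0=[0.353, 0.005, 17.0])
print(f"Estimated SID Trp:Lys break point: {bp:.1f}%")
```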
Floor space allowance for pigs has substantial effects on pig growth and welfare. Data from 30 papers examining the influence of floor space allowance on the growth of finishing pigs were used in a meta-analysis to develop alternative prediction equations for average daily gain (ADG), average daily feed intake (ADFI) and gain : feed ratio (G : F). Treatment means were compiled in a database that contained 30 papers for ADG and 28 papers for ADFI and G : F. The predictor variables evaluated were floor space (m²/pig), k (floor space/final BW^0.67), initial BW, final BW, feed space (pigs per feeder hole), water space (pigs per waterer), group size (pigs per pen), gender, floor type and study length (d). Multivariable general linear mixed model regression equations were used. Floor space treatments within each experiment were the observational and experimental unit. The optimum equations to predict ADG, ADFI and G : F were: ADG, g = 337.57 + (16 468 × k) − (237 350 × k²) − (3.1209 × initial BW (kg)) + (2.569 × final BW (kg)) + (71.6918 × k × initial BW (kg)); ADFI, g = 833.41 + (24 785 × k) − (388 998 × k²) − (3.0027 × initial BW (kg)) + (11.246 × final BW (kg)) + (187.61 × k × initial BW (kg)); G : F = predicted ADG/predicted ADFI. Overall, the meta-analysis indicates that BW is an important predictor of ADG and ADFI even after computing the constant coefficient k, which utilizes final BW in its calculation. This suggests that including initial and final BW improves the prediction over using k as a predictor alone. In addition, the analysis also indicated that G : F of finishing pigs is influenced by floor space allowance, whereas individual studies have reported variable results.
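The prediction equations above translate directly into code. The sketch below implements them as written, computing k as floor space divided by final BW^0.67; the coefficients come from the abstract, while the example inputs are arbitrary.

```python
# The ADG and ADFI prediction equations from the meta-analysis above,
# implemented as written. Example inputs are arbitrary illustrations.
def predict_performance(floor_space_m2, initial_bw_kg, final_bw_kg):
    k = floor_space_m2 / final_bw_kg ** 0.67
    adg = (337.57 + 16468 * k - 237350 * k**2
           - 3.1209 * initial_bw_kg + 2.569 * final_bw_kg
           + 71.6918 * k * initial_bw_kg)
    adfi = (833.41 + 24785 * k - 388998 * k**2
            - 3.0027 * initial_bw_kg + 11.246 * final_bw_kg
            + 187.61 * k * initial_bw_kg)
    return {"ADG_g": adg, "ADFI_g": adfi, "G_F": adg / adfi}

# Example: 0.70 m2/pig, pigs grown from 30 to 120 kg
print(predict_performance(0.70, 30.0, 120.0))
```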
Outcrops of pebbly mud (diamict) at Scarborough in Southern Ontario, Canada (the so-called Sunnybrook ‘Till’) are associated with the earliest incursion of the Laurentide Ice Sheet (LIS) into mid-continent North America some 45,000 years ago. The Sunnybrook is a blanket-like deposit containing deepwater ostracodes and occurs conformably within a thick (100 m) succession of deltaic and glaciolacustrine facies that record water depth changes in a large proglacial lake. Contextual evidence (associated facies, sedimentary structures, deposit geometry and landforms) indicates a low-energy depositional setting in an ice-dammed ancestral Lake Ontario in which scouring by floating ice masses was an important process. U-shaped, iceberg-cut scours (with lateral berms) up to 7 m deep occur on the upper surface of the Sunnybrook and are underlain by ‘sub-scour’ structures that extend several meters below the scour base. Ice-rafted concentrations of clasts (‘clast layers’), grooved surfaces formed by floating ice glissading over a muddy lake floor (‘soft sediment striations’) and melanges of sand and mud mixed by grounding ice keels (‘ice keel turbates’) are present and are all well known from modern cold environments. The wider significance of this depositional model is that the LIS margin lay east of Scarborough and did not overrun Southern Ontario. This finding is in agreement with recent data from the Erie Basin of Canada, Ohio, and Indiana where deposits formerly correlated with the Sunnybrook (and thus implying an extensive early Wisconsin ice sheet) are now regarded as Illinoian. A speculative hypothesis is proposed that relates deposition of the Sunnybrook, and two younger deposits of similar sedimentology, to surge-like instabilities of the southern LIS margin.