Validate a public health model identifying patients at high risk for carbapenem-resistant Enterobacterales (CRE) on admission and evaluate performance across a healthcare network.
Design:
Retrospective case-control studies
Participants:
Adults hospitalized with a clinical CRE culture within 3 days of admission (cases) and those hospitalized without a CRE culture (controls).
Methods:
Using public health data from Atlanta, GA (1/1/2016–9/1/2019), we validated a CRE prediction model created in Chicago. We then closely replicated this model using clinical data from a healthcare network in Atlanta (1/1/2015–12/31/2021) (“Public Health Model”) and optimized performance by adding variables from the healthcare system (“Healthcare System Model”). We frequency-matched cases and controls based on year and facility. We evaluated model performance in validation datasets using area under the curve (AUC).
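For illustration, a minimal Python sketch of how a risk model's discrimination might be checked on a validation dataset with AUC; the classifier, data, and names here are synthetic stand-ins, not the study's actual pipeline:

```python
# Sketch: AUC of a risk model on a validation set (hypothetical names/data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(500, 4)), rng.integers(0, 2, 500)
X_val, y_val = rng.normal(size=(200, 4)), rng.integers(0, 2, 200)

model = LogisticRegression().fit(X_train, y_train)  # stand-in risk model
risk = model.predict_proba(X_val)[:, 1]             # predicted CRE risk

# AUC: probability a randomly chosen case outranks a randomly chosen control
print(f"Validation AUC: {roc_auc_score(y_val, risk):.2f}")
```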
Results:
Using public health data, we matched 181 cases to 764,408 controls, and the Chicago model performed well (AUC 0.85). Using clinical data, we matched 91 cases to 384,013 controls. The Public Health Model included age, prior infection diagnosis, and the number and mean length of acute care hospitalization (ACH) stays in the prior year. The final Healthcare System Model added Elixhauser score, antibiotic days of therapy in the prior year, diabetes, and intensive care unit admission in the prior year, and removed the number of prior ACH stays. The AUC increased from 0.68 to 0.73.
Conclusions:
A CRE risk prediction model using prior healthcare exposures performed well in a geographically distinct area and in an academic healthcare network. Adding variables from healthcare networks improved model performance.
Among inpatients, peer comparison of prescribing metrics is challenging because of variation in patient mix and because multiple providers prescribe for the same patient each day. We established risk-adjusted, provider-specific antibiotic prescribing metrics to allow peer comparisons among hospitalists.
Methods:
Using clinical and billing data from inpatient encounters discharged from the Hospital Medicine Service from January 2020 through June 2021 at four acute care hospitals, we calculated bimonthly (every two months) days of therapy (DOT) for antibiotics attributed to specific providers based on patient billing dates. Ten patient-mix characteristics, including demographics, infectious disease diagnoses, and noninfectious comorbidities, were considered as potential predictors of antibiotic prescribing. Using linear mixed models, we identified risk-adjusted models predicting the prescribing of three antibiotic groups: broad-spectrum hospital-onset (BSHO), broad-spectrum community-acquired (BSCA), and anti-methicillin-resistant Staphylococcus aureus (anti-MRSA) antibiotics. Provider-specific observed-to-expected ratios (OERs) were calculated to describe provider-level antibiotic prescribing trends over time.
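A minimal sketch of the observed-to-expected ratio (OER) calculation, assuming expected DOT values come from a risk-adjusted model as described; the data and column names are hypothetical:

```python
# Sketch: provider-level observed-to-expected ratios (OERs) for antibiotic
# days of therapy (DOT). In the study, expected DOT would come from the
# risk-adjusted linear mixed models; here the values are made up.
import pandas as pd

dot = pd.DataFrame({
    "provider":     ["A", "A", "B", "B"],
    "period":       ["Jan-Feb", "Mar-Apr", "Jan-Feb", "Mar-Apr"],
    "observed_dot": [120, 95, 60, 150],
    "expected_dot": [100, 100, 80, 110],  # model-predicted DOT
})

# OER > 1: provider prescribed more than the patient-mix-adjusted expectation
dot["oer"] = dot["observed_dot"] / dot["expected_dot"]
print(dot)
```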
Results:
Predictors of antibiotic prescribing varied for the three antibiotic groups across the four hospitals; commonly selected predictors included sepsis, COVID-19, pneumonia, urinary tract infection, malignancy, and age >65 years. OERs varied within each hospital, with medians of approximately 1 and 75th percentiles of approximately 1.25. The median OER demonstrated a downward trend for the anti-MRSA group at two hospitals but remained relatively stable elsewhere. Instances of heightened antibiotic prescribing (OER >1.25) were identified in approximately 25% of the observed time points across all four hospitals.
Conclusion:
Our findings indicate that provider-specific benchmarking among inpatient providers is achievable and is a potentially valuable tool for inpatient stewardship efforts.
High-precision pulsar timing observations are limited in their accuracy by the jitter noise that appears in the arrival time of pulses. Therefore, it is important to systematically characterise the amplitude of jitter noise and its variation with frequency. In this paper, we provide jitter measurements from low-frequency wideband observations of PSR J0437−4715 using data obtained as part of the Indian Pulsar Timing Array experiment. We were able to detect jitter in both the 300–500 MHz and 1260–1460 MHz observations of the upgraded Giant Metrewave Radio Telescope (uGMRT). The former is the first jitter measurement for this pulsar below 700 MHz, and the latter is in good agreement with results from previous studies. In addition, at 300–500 MHz, we investigated the frequency dependence of jitter by calculating it for each sub-banded arrival time of pulses. We found that the jitter amplitude increases with frequency, a trend opposite to that reported in previous studies, indicating a turnover at intermediate frequencies. It will be possible to investigate this in more detail with uGMRT observations at 550–750 MHz and future high-sensitivity wideband observations from next-generation telescopes such as the Square Kilometre Array. We also explored the effect of jitter on high-precision dispersion measure (DM) measurements derived from short-duration observations. We find that, although DM precision will be better at lower frequencies owing to the smaller amplitude of jitter noise there, jitter will still limit the DM precision of high signal-to-noise observations, which are of short duration. This limitation can be overcome by integrating for a sufficiently long duration, optimised for a given pulsar.
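For context, jitter analyses conventionally quote the noise scaled to a reference number of pulses or integration time; a sketch of that standard scaling relation (assumed background convention, not an equation quoted from this paper):

```latex
% Conventional jitter scaling (assumed background, not from this paper):
% jitter averages down as the square root of the number of pulses N,
% or equivalently of the integration time T_obs.
\sigma_{\mathrm{J}}(N) = \frac{\sigma_{\mathrm{J},1}}{\sqrt{N}}
\qquad\Longleftrightarrow\qquad
\sigma_{\mathrm{J}}(T_{\mathrm{obs}}) =
\sigma_{\mathrm{J,1\,hr}}\sqrt{\frac{1\,\mathrm{hr}}{T_{\mathrm{obs}}}}
```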
Prior evidence suggests that men and women might be differentially susceptible to distinct types of childhood adversity (CA), but research on gender-specific associations between CA subtypes and psychiatric symptoms is limited.
Objectives
To test gender-specific associations between CA subtypes and psychiatric symptoms in the general population.
Methods
Data from 791 twins and siblings from the TwinssCan project were used. Psychopathology and CA exposure were assessed using the Symptom Checklist-90 Revised (SCL-90) and the Childhood Trauma Questionnaire (CTQ), respectively. The associations between the total CTQ scores and SCL-90 scores (i.e. total SCL-90, psychoticism, paranoid ideation, anxiety, depression, somatization, obsessive-compulsive, interpersonal sensitivity, hostility, and phobic anxiety) were tested in men and women separately. The associations between the five CA subtypes (i.e. physical abuse, emotional abuse, sexual abuse, physical neglect, and emotional neglect) and total SCL-90 were tested in a mutually adjusted model. As exploratory analyses, the associations between all CA subtypes and the nine SCL-90 subdomain scores were similarly tested. The regression coefficients between men and women were compared using Chow’s test. All models were adjusted for age and family structure.
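As a hedged illustration of the coefficient comparison, a minimal Python sketch of the classic Chow test on synthetic data; the study's actual models were additionally adjusted for age and family structure:

```python
# Sketch: Chow's test for equality of regression coefficients across two
# groups (e.g., men vs. women). Synthetic data; hypothetical names.
import numpy as np
from scipy import stats

def rss(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return r @ r

def chow_test(X1, y1, X2, y2):
    k, n1, n2 = X1.shape[1], len(y1), len(y2)
    pooled = rss(np.vstack([X1, X2]), np.concatenate([y1, y2]))
    separate = rss(X1, y1) + rss(X2, y2)
    f = ((pooled - separate) / k) / (separate / (n1 + n2 - 2 * k))
    return f, stats.f.sf(f, k, n1 + n2 - 2 * k)  # F statistic, p value

rng = np.random.default_rng(0)
X1 = np.column_stack([np.ones(100), rng.normal(size=100)])  # intercept + CTQ
X2 = np.column_stack([np.ones(100), rng.normal(size=100)])
y1 = X1 @ np.array([0.5, 0.013]) + rng.normal(0, 0.1, 100)
y2 = X2 @ np.array([0.5, 0.011]) + rng.normal(0, 0.1, 100)
print(chow_test(X1, y1, X2, y2))
```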
Results
Total CTQ was significantly associated with total SCL-90 in men (B = 0.013, SE = 0.003, P < .001) and women (B = 0.011, SE = 0.002, P < .001). The associations with the nine symptom domains were also significant in both genders (P < .001). No significant gender differences in the regression coefficients of total CTQ were detected. The analyses of CA subtypes showed a significant association between emotional abuse and total SCL-90 in women (B = 0.173, SE = 0.030, P < .001) and men (B = 0.080, SE = 0.035, P = .023), but the association was significantly stronger in women (χ2(1) = 4.10, P = .043). The association between sexual abuse and total SCL-90 was significant only in women (B = 0.217, SE = 0.053, P < .001). The associations of emotional neglect (B = 0.061, SE = 0.027, P = .026) and physical neglect (B = 0.167, SE = 0.043, P < .001) with total SCL-90 were significant only in men. The explorative analyses of SCL-90 subdomains revealed significant associations of emotional abuse with all nine symptom domains and of sexual abuse with seven symptom domains in women. Significant associations of physical neglect with six symptom domains and of emotional neglect with depression were also detected in men. No other significant associations between CA subtypes and total SCL-90 or symptom domain scores were observed in men and women.
Conclusions
CA exposure was associated with diverse psychopathology similarly in both genders. However, women appeared more sensitive to abuse, whereas men appeared more sensitive to neglect. Gender-specific influences of CA subtypes on psychopathology should be considered in future studies.
Over the past 2 decades, several categorizations have been proposed for the abnormalities of the aortic root. These schemes have mostly been devoid of input from specialists in congenital cardiac disease. The aim of this review is to provide a classification, from the perspective of these specialists, based on an understanding of normal and abnormal morphogenesis and anatomy, with emphasis placed on the features of clinical and surgical relevance. We contend that the description of the congenitally malformed aortic root is simplified when approached in a fashion that recognizes the normal root to be made up of 3 leaflets, supported by their own sinuses, with the sinuses themselves separated by the interleaflet triangles. The malformed root, usually found in the setting of 3 sinuses, can also be found with 2 sinuses, and very rarely with 4 sinuses. This permits description of trisinuate, bisinuate, and quadrisinuate variants, respectively. This feature then provides the basis for classification of the anatomical and functional number of leaflets present. By offering standardized terms and definitions, we submit that our classification will be suitable for those working in all cardiac specialties, whether pediatric or adult. It is of equal value in the settings of acquired or congenital cardiac disease. Our recommendations will serve to amend and/or add to the existing International Paediatric and Congenital Cardiac Code, along with the eleventh revision of the International Classification of Diseases provided by the World Health Organization.
To determine the incidence of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection among healthcare personnel (HCP) and to assess occupational risks for SARS-CoV-2 infection.
Design:
Prospective cohort of healthcare personnel (HCP) followed for 6 months from May through December 2020.
Setting:
Large academic healthcare system including 4 hospitals and affiliated clinics in Atlanta, Georgia.
Participants:
HCP, including those with and without direct patient-care activities, working during the coronavirus disease 2019 (COVID-19) pandemic.
Methods:
Incident SARS-CoV-2 infections were determined through serologic testing for SARS-CoV-2 IgG at enrollment, at 3 months, and at 6 months. HCP completed monthly surveys regarding occupational activities. Multivariable logistic regression was used to identify occupational factors that increased the risk of SARS-CoV-2 infection.
Results:
Of the 304 evaluable HCP who were seronegative at enrollment, 26 (9%) seroconverted for SARS-CoV-2 IgG by 6 months. Overall, 219 participants (73%) self-identified as White race, 119 (40%) were nurses, and 121 (40%) worked on inpatient medical-surgical floors. In a multivariable analysis, HCP who identified as Black race were more likely to seroconvert than HCP who identified as White (odds ratio, 4.5; 95% confidence interval, 1.3–14.2). Increased risk for SARS-CoV-2 infection was not identified for any occupational activity, including spending >50% of a typical shift at a patient’s bedside, working in a COVID-19 unit, or performing or being present for aerosol-generating procedures (AGPs).
Conclusions:
In our study cohort of HCP working in an academic healthcare system, <10% had evidence of SARS-CoV-2 infection over 6 months. No specific occupational activities were identified as increasing risk for SARS-CoV-2 infection.
To assess preventability of hospital-onset bacteremia and fungemia (HOB), we developed and evaluated a structured rating guide accounting for intrinsic patient and extrinsic healthcare-related risks.
Design:
HOB preventability rating guide was compared against a reference standard expert panel.
Participants:
A 10-member panel of clinical experts was assembled as the standard of preventability assessment, and 2 physician reviewers applied the rating guide for comparison.
Methods:
The expert panel independently rated 82 hypothetical HOB scenarios using a 6-point Likert scale collapsed into 3 categories: preventable, uncertain, or not preventable. Consensus was defined as concurrence on the same category among ≥70% experts. Scenarios without consensus were deliberated and followed by a second round of rating.
Two reviewers independently applied the rating guide to adjudicate the same 82 scenarios in 2 rounds, with interim revisions. Interrater reliability was evaluated using the κ (kappa) statistic.
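A minimal sketch of the interrater reliability calculation, using Cohen's kappa on hypothetical three-category ratings:

```python
# Sketch: interrater reliability via Cohen's kappa for two reviewers'
# three-category preventability ratings (hypothetical ratings).
from sklearn.metrics import cohen_kappa_score

reviewer1 = ["preventable", "uncertain", "not preventable", "preventable"]
reviewer2 = ["preventable", "preventable", "not preventable", "preventable"]

# Kappa corrects raw percent agreement for agreement expected by chance
print(f"kappa = {cohen_kappa_score(reviewer1, reviewer2):.2f}")
```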
Results:
Expert panel consensus criteria were met for 52 scenarios (63%) after 2 rounds.
After 2 rounds, guide-based rating matched expert panel consensus in 40 of 52 (77%) and 39 of 52 (75%) cases for reviewers 1 and 2, respectively. Agreement rates between the 2 reviewers were 84% overall (κ, 0.76; 95% confidence interval [CI], 0.64–0.88) and 87% (κ, 0.79; 95% CI, 0.65–0.94) for the 52 scenarios with expert consensus.
Conclusions:
Preventability ratings of HOB scenarios by 2 reviewers using a rating guide matched expert consensus in most cases with moderately high interreviewer reliability. Although diversity of expert opinions and uncertainty of preventability merit further exploration, this is a step toward standardized assessment of HOB preventability.
To determine the impact of an inpatient stewardship intervention targeting fluoroquinolone use on inpatient and postdischarge Clostridioides difficile infection (CDI).
Design:
We used an interrupted time series study design to evaluate the rate of hospital-onset CDI (HO-CDI), postdischarge CDI (PD-CDI) within 12 weeks, and inpatient fluoroquinolone use from 2 years prior to 1 year after a stewardship intervention.
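A minimal sketch of a segmented regression of this kind, on synthetic monthly counts with a patient-day offset; the study's actual model specification may differ:

```python
# Sketch: segmented (interrupted time series) regression on monthly event
# counts with a patient-day offset. Synthetic data; hypothetical names.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({"month": np.arange(36)})
df["post"] = (df["month"] >= 24).astype(int)      # intervention at month 24
df["months_post"] = np.where(df["post"] == 1, df["month"] - 23, 0)
df["patient_days"] = 40_000
df["events"] = rng.poisson(10, size=36)

fit = smf.glm(
    "events ~ month + post + months_post", data=df,
    family=sm.families.Poisson(), offset=np.log(df["patient_days"]),
).fit()

# exp(post) = level-change rate ratio; exp(months_post) = trend change
print(np.exp(fit.params))
```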
Setting:
An academic healthcare system with 4 hospitals.
Patients:
All inpatients hospitalized between January 2017 and September 2020, excluding those discharged from locations caring for oncology, bone marrow transplant, or solid-organ transplant patients.
Intervention:
Introduction of electronic order sets designed to reduce inpatient fluoroquinolone prescribing.
Results:
Among 163,117 admissions, there were 683 cases of HO-CDI and 1,104 cases of PD-CDI. In the context of a 2% month-to-month decline that began in the preintervention period (P < .01), we observed a 21% reduction in fluoroquinolone days of therapy per 1,000 patient days after the intervention (level change, P < .05). HO-CDI rates were stable throughout the study period. In contrast, we detected a change in the trend of PD-CDI rates from a stable monthly rate in the preintervention period to a monthly decrease of 2.5% in the postintervention period (P < .01).
Conclusions:
Our systemwide intervention reduced inpatient fluoroquinolone use immediately, but not HO-CDI. However, a downward trend in PD-CDI occurred. Relying on outcome measures limited to the inpatient setting may not reflect the full impact of inpatient stewardship efforts.
To determine patient-specific risk factors and clinical outcomes associated with contaminated blood cultures.
Design:
A single-center, retrospective case-control risk factor and clinical outcome analysis performed on inpatients with blood cultures collected in the emergency department, 2014–2018. Patients with contaminated blood cultures (cases) were compared to patients with negative blood cultures (controls).
Setting:
A 509-bed tertiary-care university hospital.
Methods:
Risk factors independently associated with blood-culture contamination were determined using multivariable logistic regression. The impacts of contamination on clinical outcomes were assessed using linear regression, logistic regression, and generalized linear model with γ log link.
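A minimal sketch of the gamma log-link model named above, fit to synthetic data with hypothetical column names:

```python
# Sketch: gamma GLM with log link for a skewed positive outcome (e.g.,
# hospital charges), adjusted for confounders. Synthetic data and names.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "contaminated": rng.integers(0, 2, n),
    "age":          rng.normal(60, 15, n),
    "bmi":          rng.normal(28, 6, n),
    "sepsis":       rng.integers(0, 2, n),
})
df["charges"] = rng.gamma(2.0, 5_000, n) * 1.25 ** df["contaminated"]

fit = smf.glm(
    "charges ~ contaminated + age + bmi + sepsis", data=df,
    family=sm.families.Gamma(link=sm.families.links.Log()),
).fit()

# With a log link, exp(coef) is a multiplicative effect on mean charges
print(np.exp(fit.params["contaminated"]))
```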
Results:
Of 13,782 blood cultures, 1,504 (10.9%) true positives were excluded, leaving 1,012 (7.3%) cases and 11,266 (81.7%) controls. The following factors were independently associated with blood-culture contamination: increasing age (adjusted odds ratio [aOR], 1.01; 95% confidence interval [CI], 1.01–1.01), Black race (aOR, 1.32; 95% CI, 1.15–1.51), increased body mass index (BMI; aOR, 1.01; 95% CI, 1.00–1.02), chronic obstructive pulmonary disease (aOR, 1.16; 95% CI, 1.02–1.33), paralysis (aOR, 1.64; 95% CI, 1.26–2.14), and sepsis plus shock (aOR, 1.26; 95% CI, 1.07–1.49). After controlling for age, race, BMI, and sepsis, blood-culture contamination increased length of stay (LOS; β = 1.24 ± 0.24; P < .0001), length of antibiotic treatment (LOT; β = 1.01 ± 0.20; P < .001), hospital charges (β = 0.22 ± 0.03; P < .0001), acute kidney injury (AKI; aOR, 1.60; 95% CI, 1.40–1.83), echocardiogram orders (aOR, 1.51; 95% CI, 1.30–1.75), and in-hospital mortality (aOR, 1.69; 95% CI, 1.31–2.16).
Conclusions:
These unique risk factors identify high-risk individuals for blood-culture contamination. After controlling for confounders, contamination significantly increased LOS, LOT, hospital charges, AKI, echocardiograms, and in-hospital mortality.
Among 353 healthcare personnel in a longitudinal cohort in 4 hospitals in Atlanta, Georgia (May–June 2020), 23 (6.5%) had severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) antibodies. Spending >50% of a typical shift at the bedside (OR, 3.4; 95% CI, 1.2–10.5) and Black race (OR, 8.4; 95% CI, 2.7–27.4) were associated with SARS-CoV-2 seropositivity.
In a survey of hospitals and of patients with Clostridioides difficile infection (CDI), we found that most facilities had educational materials or protocols for education of CDI patients. However, approximately half of CDI patients did not recall receiving education during their admission, and knowledge deficits regarding CDI prevention were common.
Introduction:
Emergency care serves as an important health resource for First Nations (FN) persons. Previous reporting shows that FN persons visit emergency departments at almost double the rate of non-FN persons. Working collaboratively with FN partners, academic researchers, and health authority staff, we investigated FN emergency care patient visit statistics in Alberta over a five-year period.
Methods:
In a population-based retrospective cohort study for the period from April 1, 2012 to March 31, 2017, patient demographics and emergency care visit characteristics for status FN patients in Alberta were analyzed and compared to non-FN statistics. Frequencies and percentages (%) describe patients and visits by categorical variables (e.g., Canadian Triage Acuity Scale (CTAS) level). Means and standard deviations (or medians and interquartile ranges (IQR)) describe continuous variables (e.g., distances), as appropriate for the data distribution. These descriptions are reported separately for the FN and non-FN populations.
Results:
The data set contains 11,686,288 emergency facility visits by 3,024,491 unique persons. FN people make up 4.8% of unique patients and 9.4% of emergency care visits. FN persons live farther from emergency facilities than their non-FN counterparts (FN median 6 km, IQR 1–24; vs non-FN median 4 km, IQR 2–8). FN visits arrive more often by ground ambulance (15.3% vs 10%). FN visits are more commonly triaged as less acute (59% CTAS levels 4 and 5, compared to non-FN 50.4%). More FN visits end in leaving without completing treatment (6.7% vs 3.6%). FN visits occur more often in the evening, 4:01 pm to 12:00 am (43.6% vs 38.1%).
Conclusion:
In a collaborative validation session, FN Elders and health directors contextualized emergency care presentation in the evenings and less acute triage scores as related to difficulties accessing primary care. They explained evening presentation, arrival by ambulance, and leaving without completing treatment in terms of issues accessing transport to and from emergency facilities. Many factors interact to determine FN patients’ emergency care visit characteristics and outcomes. Further research needs to separate the impact of FN identity from factors such as reasons for visiting emergency facilities, distance traveled to care, and the size of the facility where care is provided.
Attention Deficit Hyperactivity Disorder (ADHD) is a serious risk factor for co-occurring psychiatric disorders and negative psychosocial consequences in adulthood. Given this background, there is a great need for effective treatment of adult ADHD patients.
Our research group has therefore conducted the first randomized controlled multicenter study evaluating a disorder-tailored, DBT-based group program in adult ADHD compared with psychopharmacological treatment.
Between 2007 and 2010, in a four-arm design, 433 patients were randomized to a manualized dialectical behavioural therapy (DBT)-based group program plus methylphenidate or placebo, or to clinical management plus methylphenidate or placebo, with weekly sessions in the first twelve weeks and monthly sessions thereafter. Therapists were qualified psychologists or physicians. Treatment integrity was established by independent supervision. The primary endpoint (ADHD symptoms measured by the Conners Adult ADHD Rating Scale) was rated by interviewers blind to treatment allocation (Current Controlled Trials ISRCTN54096201). The trial is funded by the German Federal Ministry of Research and Education (01GV0606) and is part of the German network for the treatment of ADHD in children and adults (ADHD-NET). In this lecture, the first data from our interim analysis are presented (baseline data and results on treatment compliance and adherence).
There is a lack of Cameroonian adult neuropsychological (NP) norms, limited knowledge concerning HIV-associated neurocognitive disorders in Sub-Saharan Africa, and evidence of differential inflammation and disease progression based on viral subtypes. In this study, we developed demographically corrected norms and assessed the effects of HIV and viral genotypes on attention/working memory (WM), learning, and memory.
Method:
We administered two tests of attention/WM [Paced Auditory Serial Addition Test (PASAT)-50, Wechsler Memory Scale (WMS)-III Spatial Span] and two tests of learning and memory [Brief Visuospatial Memory Test-Revised (BVMT-R), Hopkins Verbal Learning Test-Revised (HVLT-R)] to 347 HIV+ and 395 seronegative adult Cameroonians. We assessed the effects of viral factors on neurocognitive performance.
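A minimal sketch of the standard regression-based approach to demographically corrected T-scores, fit on synthetic control data; the study's published equations may use additional or transformed predictors:

```python
# Sketch: regression-based demographically corrected T-scores derived from
# control (normative) data. Synthetic data; hypothetical predictor names.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
controls = pd.DataFrame({
    "age":       rng.integers(18, 70, 395),
    "education": rng.integers(4, 20, 395),
    "sex":       rng.integers(0, 2, 395),
})
controls["raw_score"] = (30 - 0.2 * controls["age"]
                         + 0.5 * controls["education"]
                         + rng.normal(0, 5, 395))

# Fit raw scores on demographics in controls only
norms = smf.ols("raw_score ~ age + education + sex", data=controls).fit()

def t_score(df):
    # Residual from the demographic expectation, scaled to mean 50 / SD 10
    resid = df["raw_score"] - norms.predict(df)
    return 50 + 10 * resid / np.sqrt(norms.scale)

print(t_score(controls).describe())
```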
Results:
Compared to controls, people living with HIV (PLWH) had significantly lower T-scores on PASAT-50 and attention/WM summary scores; on HVLT-R total learning and learning summary scores; and on HVLT-R delayed recall, BVMT-R delayed recall, and memory summary scores. More PLWH had impairment in attention/WM, learning, and memory. Antiretroviral therapy (ART) and current immune status had no effect on T-scores. Compared to untreated cases with detectable viremia, untreated cases with undetectable viremia had significantly lower (worse) T-scores on BVMT-R total learning, BVMT-R delayed recall, and memory composite scores. Compared to PLWH infected with other subtypes (41.83%), those infected with HIV-1 CRF02_AG (58.17%) had higher (better) attention/WM T-scores.
Conclusions:
PLWH in Cameroon have impaired attention/WM, learning, and memory, and those infected with CRF02_AG viruses showed reduced deficits in attention/WM. The first adult normative standards for assessing attention/WM, learning, and memory described here, with equations for computing demographically adjusted T-scores, will facilitate future studies of diseases affecting cognitive function in Cameroonians.
To determine the effect of an electronic medical record (EMR) nudge at reducing total and inappropriate orders testing for hospital-onset Clostridioides difficile infection (HO-CDI).
Design:
An interrupted time series analysis of HO-CDI orders 2 years before and 2 years after the implementation of an EMR intervention designed to reduce inappropriate HO-CDI testing. Orders for C. difficile testing were considered inappropriate if the patient had received a laxative or stool softener in the previous 24 hours.
Setting:
Four hospitals in an academic healthcare network.
Patients:
All patients with a C. difficile order after hospital day 3.
Intervention:
Orders for C. difficile testing in patients administered a laxative or stool softener in the previous 24 hours triggered an EMR alert defaulting to cancellation of the order (“nudge”).
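A minimal sketch of the decision logic as described, not the actual EMR rule; names and timestamps are hypothetical:

```python
# Sketch of the described nudge logic: a C. difficile order is flagged for
# the default-cancel alert when a laxative or stool softener was given in
# the prior 24 hours. Hypothetical function and variable names.
from datetime import datetime, timedelta

def should_nudge(order_time: datetime, laxative_times: list) -> bool:
    window = timedelta(hours=24)
    return any(timedelta(0) <= order_time - t <= window
               for t in laxative_times)

admit = datetime(2020, 3, 1, 8, 0)
order = datetime(2020, 3, 5, 9, 0)
day = (order - admit).days + 1            # hospital day 5: an HO-CDI order
laxatives = [datetime(2020, 3, 4, 22, 0)] # given 11 hours before the order
print(day, should_nudge(order, laxatives))  # 5 True -> alert fires
```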
Results:
Of the 17,694 HO-CDI orders, 7% were inappropriate (8% preintervention vs 6% postintervention; P < .001). Monthly HO-CDI orders decreased by 21% postintervention (level-change rate ratio [RR], 0.79; 95% confidence interval [CI], 0.73–0.86), and the rate continued to decrease (postintervention trend change RR, 0.99; 95% CI, 0.98–1.00). The intervention was not associated with a level change in inappropriate HO-CDI orders (RR, 0.80; 95% CI, 0.61–1.05), but the postintervention inappropriate order rate decreased over time (RR, 0.95; 95% CI, 0.93–0.97).
Conclusion:
An EMR nudge to minimize inappropriate ordering for C. difficile was effective at reducing HO-CDI orders, and likely contributed to decreasing the inappropriate HO-CDI order rate after the intervention.
The Murchison Widefield Array (MWA) is an open-access telescope dedicated to studying the low-frequency (80–300 MHz) southern sky. Since beginning operations in mid-2013, the MWA has opened a new observational window in the southern hemisphere, enabling many science areas. The driving science objectives of the original design were to observe 21 cm radiation from the Epoch of Reionisation (EoR), explore the radio time domain, perform Galactic and extragalactic surveys, and monitor solar, heliospheric, and ionospheric phenomena. Altogether, more than 60 programs have recorded 20 000 h of observations, producing 146 papers to date. In 2016, the telescope underwent a major upgrade resulting in alternating compact and extended configurations. Other upgrades, including digital back-ends and a rapid-response triggering system, have been developed since the original array was commissioned. In this paper, we review the major results from the prior operation of the MWA and then discuss the new science paths enabled by the improved capabilities. We group these science opportunities by the four original science themes but also include ideas for directions outside these categories.
Weed control in corn traditionally has relied on atrazine as a foundational tool to control problematic weeds. However, the recent discovery of atrazine in aquifers and other water sources increases the likelihood of stricter restrictions on its use. Field-based research trials to find atrazine alternatives were conducted in 2017 and 2018 in Fayetteville, AR, by testing the tolerance of corn to preemergence (PRE) and postemergence (POST) applications of different photosystem II (PSII) inhibitors alone or in combination with mesotrione or S-metolachlor. All experiments were designed as a two-factor factorial, randomized complete block, with the two factors being (1) the PSII-inhibiting herbicide and (2) the herbicide added to create the mixture. The PSII-inhibiting herbicides were prometryn, ametryn, simazine, fluometuron, metribuzin, linuron, diuron, atrazine, and propazine. The second factor consisted of either no additional herbicide, S-metolachlor, or mesotrione. Treatments were applied immediately after planting in the PRE experiments and to 30-cm-tall corn in the POST experiments. In the PRE study, low levels of injury (<15%) were observed at 14 and 28 d after application, and corn height was negatively affected by the PSII-inhibiting herbicide applied. PRE-applied fluometuron- and ametryn-containing treatments consistently caused injury to corn, often exceeding 5%. Because of the low injury levels caused by all treatments, crop density and yield did not differ from those of the nontreated plants. In the POST study, crop injury, relative height, and relative yield were affected by the PSII-inhibiting herbicide and the herbicide added. Ametryn-, diuron-, linuron-, propazine-, and prometryn-containing treatments caused at least 25% injury to corn in at least 1 site-year. All PSII-inhibiting herbicides, except metribuzin and simazine applied alone, caused yield loss in corn when compared with atrazine alone. Diuron-, linuron-, metribuzin-, and simazine-containing treatments applied PRE, and metribuzin- and simazine-containing treatments applied POST, should be investigated further as atrazine replacements. A sketch of the factorial analysis implied by this design follows.
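A minimal sketch of the two-factor factorial analysis in a randomized complete block design described above, on synthetic data; the trial's actual analysis may have differed:

```python
# Sketch: two-factor factorial ANOVA in a randomized complete block design
# (factor 1: PSII inhibitor; factor 2: added herbicide). Synthetic data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
psii = ["atrazine", "ametryn", "diuron"]
partner = ["none", "S-metolachlor", "mesotrione"]
rows = [(h, p, b) for h in psii for p in partner for b in range(4)]
df = pd.DataFrame(rows, columns=["psii", "partner", "block"])
df["injury"] = rng.normal(10, 3, len(df))  # % visible crop injury

fit = smf.ols("injury ~ C(psii) * C(partner) + C(block)", data=df).fit()
print(sm.stats.anova_lm(fit, typ=2))       # main effects and interaction
```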