Ultrasound-guided wire localisation may improve intra-operative identification and outcomes of non-palpable cervical lymphadenopathy in a previously treated neck. We undertook a literature search and present our case series to determine the safety and efficacy of ultrasound-guided wire localisation.
Methods
A search of databases up to 29 April 2024 was performed. At our tertiary centre, ultrasound-guided wire localisation was utilised for 20 patients with cervical lymphadenopathy between February 2021 and April 2024.
Results
Seventeen studies with a combined total of 92 patients were identified, with one complication reported. Within our case series, all 20 patients had accurate lesion localisation using ultrasound-guided wire localisation and none required repeat operations.
Conclusion
Ultrasound-guided wire localisation is a safe and cost-effective technique for localising lesions in an otherwise difficult area in which to operate, providing confidence to the multidisciplinary team, particularly where histopathology indicates benignity. Surgical outcomes do not appear worse than those achieved without ultrasound-guided wire localisation. We advocate its use, provided patients are appropriately selected.
Foliar-applied postemergence applications of glufosinate are often made to glufosinate-resistant crops to provide nonselective weed control without significant crop injury. Rainfall, air temperature, solar radiation, and relative humidity near the time of application have been reported to affect glufosinate efficacy. However, previous research may not have captured the full range of weather variability to which glufosinate may be exposed before or following application. Additionally, climate models suggest more extreme weather will become the norm, further expanding the weather range to which glufosinate can be exposed. The objective of this research was to quantify the probability of successful weed control (efficacy ≥85%) with glufosinate applied to some key weed species across a broad range of weather conditions. A database of >10,000 North American herbicide evaluation trials was used in this study. The database was filtered to include treatments with a single postemergence application of glufosinate applied to waterhemp [Amaranthus tuberculatus (Moq.) Sauer], morningglory species (Ipomoea spp.), and/or giant foxtail (Setaria faberi Herrm.) <15 cm in height. These species were chosen because they are well represented in the database and listed as common and troublesome weed species in both corn (Zea mays L.) and soybean [Glycine max (L.) Merr.] (Van Wychen 2020, 2022). Individual random forest models were created for each weed species. Low rainfall (≤20 mm) over the 5 d before glufosinate application was detrimental to the probability of successful control of A. tuberculatus and S. faberi. Lower relative humidity (≤70%) and solar radiation (≤23 MJ m−2 d−1) on the day of application reduced the probability of successful weed control in most cases. Additionally, the probability of successful control decreased for all species when average air temperature over the first 5 d after application was ≤25 C. As climate continues to change and become more variable, the risk of unacceptable control of several common species with glufosinate is likely to increase.
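To make the modeling approach concrete, the sketch below fits a random forest classifier to trial-level weather covariates and predicts the probability that control reaches the 85% threshold. It is an illustrative sketch only: the column names, synthetic data, and scenario values are hypothetical and do not come from the herbicide evaluation database described above.

```python
# Illustrative sketch: random forest classification of "successful" control
# (efficacy >= 85%) from weather covariates. All data below are synthetic.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 500
weather = pd.DataFrame({
    "rain_5d_before_mm": rng.uniform(0, 80, n),    # rainfall over the 5 d before application
    "rh_day_of_pct": rng.uniform(30, 100, n),      # relative humidity on the day of application
    "solar_day_of_mj_m2": rng.uniform(5, 35, n),   # solar radiation on the day of application
    "temp_5d_after_c": rng.uniform(10, 35, n),     # mean air temperature over the 5 d after application
})
# Synthetic efficacy that loosely increases with each covariate, plus noise
efficacy = (55
            + 0.20 * weather["rain_5d_before_mm"]
            + 0.10 * weather["rh_day_of_pct"]
            + 0.30 * weather["solar_day_of_mj_m2"]
            + 0.50 * weather["temp_5d_after_c"]
            + rng.normal(0, 5, n))
success = (efficacy >= 85).astype(int)             # threshold for "successful" control

model = RandomForestClassifier(n_estimators=500, random_state=0)
model.fit(weather, success)

# Predicted probability of successful control under a dry, cool post-application scenario
scenario = pd.DataFrame([{"rain_5d_before_mm": 10, "rh_day_of_pct": 60,
                          "solar_day_of_mj_m2": 18, "temp_5d_after_c": 22}])
print(model.predict_proba(scenario)[0, 1])
```

In practice, a separate model of this form would be fit for each weed species, and variable-importance or partial-dependence summaries would be used to locate the weather thresholds reported above.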
Foliar-applied postemergence herbicides are a critical component of corn (Zea mays L.) and soybean [Glycine max (L.) Merr.] weed management programs in North America. Rainfall and air temperature around the time of application may affect the efficacy of herbicides applied postemergence in corn or soybean production fields. However, previous research utilized a limited number of site-years and may not capture the range of rainfall and air temperatures that these herbicides are exposed to throughout North America. The objective of this research was to model the probability of achieving successful weed control (≥85%) with commonly applied postemergence herbicides across a broad range of environments. A large database of more than 10,000 individual herbicide evaluation field trials conducted throughout North America was used in this study. The database was filtered to include only trials with a single postemergence application of fomesafen, glyphosate, mesotrione, or fomesafen + glyphosate. Waterhemp [Amaranthus tuberculatus (Moq.) Sauer], morningglory species (Ipomoea spp.), and giant foxtail (Setaria faberi Herrm.) were the weeds of focus. Separate random forest models were created for each weed species by herbicide combination. The probability of successful weed control deteriorated when the average air temperature within the first 10 d after application was <19 or >25 C for most of the herbicide by weed species models. Additionally, drier conditions before postemergence herbicide application reduced the probability of successful control for several of the herbicide by weed species models. As air temperatures increase and rainfall becomes more variable, weed control with many of the commonly used postemergence herbicides is likely to become less reliable.
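A simple way to inspect the temperature pattern described above (the probability of successful control deteriorating when the 10-d post-application average falls below 19 C or exceeds 25 C) is to tabulate the share of treatments reaching 85% control by herbicide-by-weed-species group and temperature bin. The sketch below shows that tabulation on synthetic records; the column names and values are hypothetical, not the trial database.

```python
# Illustrative sketch: empirical probability of successful control (>= 85%) by
# herbicide-by-weed-species group and 10-d post-application temperature bin.
# Column names and synthetic rows are hypothetical stand-ins for the trial database.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
n = 1000
trials = pd.DataFrame({
    "herbicide": rng.choice(["glyphosate", "fomesafen", "mesotrione"], n),
    "weed_species": rng.choice(["A. tuberculatus", "Ipomoea spp.", "S. faberi"], n),
    "temp_10d_after_c": rng.uniform(10, 35, n),
    "control_pct": rng.uniform(50, 100, n),
})
trials["success"] = trials["control_pct"] >= 85
trials["temp_bin"] = pd.cut(trials["temp_10d_after_c"], bins=[10, 19, 25, 35],
                            labels=["<19 C", "19-25 C", ">25 C"], include_lowest=True)

prob = (trials.groupby(["herbicide", "weed_species", "temp_bin"], observed=True)["success"]
              .mean()
              .rename("p_success"))
print(prob.head(9))
```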
Flowering rush (Butomus umbellatus L.) is an emergent perennial monocot that has invaded aquatic systems along the U.S.–Canadian border. Currently, there are two known cytotypes of flowering rush, diploid and triploid, within the invaded range. Although most studies have focused on the triploid cytotype, little is known about diploid plants. Therefore, phenology and resource allocation were studied in the diploid cytotype of flowering rush at three study sites (Mentor Marsh, OH; Tonawanda Wildlife Management Area, NY; and Unity Island, NY) to understand seasonal resource allocation and environmental influences on growth, and to optimize management strategies. Samples were harvested once a month from May to November at each site from 2021 to 2023. Plant metrics were regressed against air temperature, water temperature, and water depth. Aboveground biomass peaked from July to September and comprised 50% to 70% of total biomass. Rhizome biomass peaked from September to November and comprised 40% to 50% of total biomass. Rhizome bulbil densities peaked from September to November at 3,000 to 16,000 rhizome bulbils m−2. Regression analysis revealed strong negative relationships between rhizome starch content and air temperature (r2 = 0.52) and water temperature (r2 = 0.46). Other significant, though weak, relationships were found, including a positive relationship between aboveground biomass and air temperature (r2 = 0.17), a negative relationship between rhizome bulbil biomass and air temperature (r2 = 0.18), and a positive relationship between leaf density and air temperature (r2 = 0.17). Rhizomes and rhizome bulbils combined stored up to 60% of total starch and therefore present a unique challenge to management, as these structures cannot be reached directly with herbicides. Therefore, management should target the aboveground tissue before peak production (July) to reduce internal starch storage and aim to limit regrowth over several years.
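As a minimal illustration of the regressions reported above (for example, rhizome starch content against water temperature), the sketch below fits a simple linear regression and reports r2. The paired measurements are synthetic; the slope and r2 they produce are not the study's values.

```python
# Illustrative sketch: simple linear regression of rhizome starch content on
# water temperature, reporting r^2 as in the abstract. Data below are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
water_temp_c = rng.uniform(5, 28, 60)                         # monthly water temperatures
starch_pct = 55 - 1.2 * water_temp_c + rng.normal(0, 6, 60)   # starch declines as temperature rises

fit = stats.linregress(water_temp_c, starch_pct)
print(f"slope = {fit.slope:.2f}, r^2 = {fit.rvalue**2:.2f}")
```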
Cereal rye (Secale cereale L.) cover crop and preemergence herbicides are important components of an integrated weed management program for waterhemp [Amaranthus tuberculatus (Moq.) Sauer] and Palmer amaranth (Amaranthus palmeri S. Watson) management in soybean [Glycine max (L.) Merr.]. Accumulating adequate cereal rye biomass for effective suppression of Amaranthus spp. can be challenging in the upper Midwest due to the short window for cereal rye growth in a corn–soybean rotation. Farmers are adopting the planting green system to optimize cereal rye biomass production and weed suppression. This study aimed to evaluate the feasibility of planting soybean green when integrated with preemergence herbicides for the control of Amaranthus spp. under two soybean planting time frames. The study was conducted across 19 site-years in the United States over the 2021 and 2022 growing seasons. Factors included cover crop management practices (“no-till,” “cereal rye early-term,” and “cereal rye plant-green”), soybean planting times (“early” and “late”), and use of preemergence herbicides (“NO PRE” and “YES PRE”). Planting soybean green increased cereal rye biomass production by 33% compared with early termination. Greater cereal rye biomass production when planting green provided a 44% reduction in Amaranthus spp. density compared with no-till. The use of preemergence herbicides also resulted in a 68% reduction in Amaranthus spp. density compared with NO PRE. Greater cereal rye biomass produced when planting green reduced soybean stand, which directly reduced soybean yield in some site-years. Planting soybean green is a feasible management practice to optimize cereal rye biomass production, which, combined with preemergence herbicides, provided effective Amaranthus spp. management. Soybean stand was a key factor in maintaining soybean yields compared with no-till when planting green. Farmers should follow best management recommendations for proper planter and equipment setup to ensure effective soybean establishment under high levels of cereal rye biomass when planting green.
Informal carers (unpaid family members and friends) are critical to the ongoing delivery of health and well-being needs for millions of people worldwide. However, the physical and mental wellbeing of caregivers is often poor, including low levels of physical activity, frequently owing to contributing factors such as lack of time, support, and motivation. Thus, accessible evidence-based tools to facilitate physical activity for carers are urgently needed.
Objective:
The aim of this study was to co-design and develop a novel mobile app to educate and support carers in undertaking regular physical activity. This is achieved by integrating the transtheoretical model of behaviour change and the UK physical activity guidelines across 8 weeks of use.
Methods:
We co-designed a mobile app, “CareFit,” by directly involving caregivers, health care professionals, and social care professionals in the requirements-capturing and evaluation phases across a number of Agile Scrum development sprints. Requirements for CareFit were grounded in a combination of behavioural change science and UK government physical activity guidelines.
Results:
Participants identified different barriers and enablers to physical activity, such as a lack of time, recognition of existing activities, and concerns regarding safely undertaking physical activity. Requirements analysis highlighted the importance of simplicity in design and a need to anchor development around the everyday needs of caregivers (eg, easy-to-use video instructions, reducing text). Our final prototype app integrated guidance for undertaking physical activity at home through educational, physical activity, and communication components.
Conclusions:
Integrating government guidelines with models of behavioural change into a mobile app to support the physical activity of carers is novel and holds future promise. Integrating core physical activity guidelines into a co-designed smartphone app with functionality such as a weekly planner and educational material for users is feasible, acceptable, and usable. Here we document the latest developments on the project, including an ongoing national study taking place in Scotland to test the prototype with 50 carers.
The deleterious effects of adversity are likely intergenerational, such that one generation’s adverse experiences can affect the next. Epidemiological studies link maternal adversity to offspring depression and anxiety, possibly via transmission mechanisms that influence offspring fronto-limbic connectivity. However, studies have not thoroughly dissociated postnatal exposure effects nor considered the role of offspring sex. We utilized infant neuroimaging to test the hypothesis that maternal childhood maltreatment (CM) would be associated with increased fronto-limbic connectivity in infancy and tested brain-behavior associations in childhood. Ninety-two dyads participated (32 mothers with CM, 60 without; 52 female infants, 40 male infants). Women reported on their experiences of CM, and non-sedated sleeping infants underwent MRI at 2.44 ± 2.74 weeks. Brain volumes were estimated via structural MRI, and white matter structural connectivity (fiber counts) via diffusion MRI with probabilistic tractography. A subset of parents (n = 36) reported on children’s behaviors at age 5.17 ± 1.73 years. Males in the maltreatment group demonstrated greater intra-hemispheric fronto-limbic connectivity (b = 0.96, p = 0.008, 95% CI [0.25, 1.66]); no differences emerged for females. Fronto-limbic connectivity was related to somatic complaints in childhood only for males (r = 0.673, p = 0.006). Our findings suggest that CM could have intergenerational associations with offspring brain development, yet mechanistic studies are needed.
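The sex-specific result reported above implies a group-by-sex moderation model; a minimal sketch of such a regression is shown below. The variable names and simulated data are hypothetical, and this is not the authors' analysis code.

```python
# Illustrative sketch: regression of fronto-limbic connectivity on maternal CM group,
# infant sex, and their interaction. Simulated data; variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 92
df = pd.DataFrame({
    "cm_group": rng.integers(0, 2, n),   # 1 = mother reported childhood maltreatment
    "male": rng.integers(0, 2, n),       # 1 = male infant
})
# Simulated connectivity with an effect only in CM-exposed male infants
df["connectivity"] = 0.9 * df["cm_group"] * df["male"] + rng.normal(0, 1, n)

fit = smf.ols("connectivity ~ cm_group * male", data=df).fit()
print(fit.summary().tables[1])   # coefficient table, including the interaction term
```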
The Eighth World Congress of Pediatric Cardiology and Cardiac Surgery (WCPCCS) will be held in Washington DC, USA, from Saturday, 26 August, 2023 to Friday, 1 September, 2023, inclusive. The Eighth World Congress of Pediatric Cardiology and Cardiac Surgery will be the largest and most comprehensive scientific meeting dedicated to paediatric and congenital cardiac care ever held. At the time of the writing of this manuscript, the Eighth World Congress of Pediatric Cardiology and Cardiac Surgery has 5,037 registered attendees (and rising) from 117 countries, a truly diverse and international faculty of over 925 individuals from 89 countries, over 2,000 individual abstracts and poster presenters from 101 countries, and a Best Abstract Competition featuring 153 oral abstracts from 34 countries. For information about the Eighth World Congress of Pediatric Cardiology and Cardiac Surgery, please visit the following website: www.WCPCCS2023.org. The purpose of this manuscript is to review the activities related to global health and advocacy that will occur at the Eighth World Congress of Pediatric Cardiology and Cardiac Surgery.
Acknowledging the need for urgent change, we wanted to take the opportunity to bring a common voice to the global community and issue the Washington DC WCPCCS Call to Action on Addressing the Global Burden of Pediatric and Congenital Heart Diseases. A copy of this Washington DC WCPCCS Call to Action is provided in the Appendix of this manuscript. This Washington DC WCPCCS Call to Action is an initiative aimed at increasing awareness of the global burden, promoting the development of sustainable care systems, and improving access to high quality and equitable healthcare for children with heart disease as well as adults with congenital heart disease worldwide.
Children with congenital heart disease (CHD) can face neurodevelopmental, psychological, and behavioural difficulties beginning in infancy and continuing through adulthood. Despite overall improvements in medical care and a growing focus on neurodevelopmental screening and evaluation in recent years, neurodevelopmental disabilities, delays, and deficits remain a concern. The Cardiac Neurodevelopmental Outcome Collaborative was founded in 2016 with the goal of improving neurodevelopmental outcomes for individuals with CHD and pediatric heart disease. This paper describes the establishment of a centralised clinical data registry to standardize data collection across member institutions of the Cardiac Neurodevelopmental Outcome Collaborative. The goal of this registry is to foster collaboration for large, multi-centre research and quality improvement initiatives that will benefit individuals and families with CHD and improve their quality of life. We describe the components of the registry, initial research projects proposed using data from the registry, and lessons learned in the development of the registry.
We describe the association between job roles and coronavirus disease 2019 (COVID-19) among healthcare personnel. A wide range of hazard ratios were observed across job roles. Medical assistants had higher hazard ratios than nurses, while attending physicians, food service workers, laboratory technicians, pharmacists, residents and fellows, and temporary workers had lower hazard ratios.
Objective:
To compare the accuracy of monitoring the personal protective equipment (PPE) donning and doffing process between a human–AI machine system (an artificial intelligence [AI] machine combined with a remote human buddy support system) and an onsite buddy, and to determine the degree of AI autonomy at the current stage of development.
Design and setting:
We conducted a pilot simulation study with 30 procedural scenarios (15 donning and 15 doffing, performed by one individual) incorporating random errors in 55 steps. In total, 195 steps were assessed.
Methods:
The human–AI machine system and the onsite buddy assessed the procedures independently. The human–AI machine system performed the assessment via a tablet device, which was positioned to allow full-body visualization of the donning and doffing person.
Results:
The overall accuracy of PPE monitoring using the human–AI machine system was 100% and the overall accuracy of the onsite buddy was 99%. There was a very good agreement between the 2 methods (κ coefficient, 0.97). The current version of the AI technology was able to perform autonomously, without the remote human buddy’s rectification in 173 (89%) of 195 steps. It identified 67.3% of all the errors independently.
Conclusions:
This study provides preliminary evidence suggesting that a human–AI machine system may be able to serve as a substitute for, or an enhancement of, an onsite buddy performing the PPE monitoring task. It provides practical assistance using a combination of a computer mirror, visual prompts, and verbal commands. However, further studies are required to examine its clinical efficacy with a diverse range of individuals performing the donning and doffing procedures.
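The agreement statistic reported in the results (κ = 0.97) is Cohen's kappa over the per-step assessments; a minimal sketch of that calculation, using synthetic stand-ins for the 195 assessed steps, is shown below.

```python
# Illustrative sketch: inter-rater agreement (Cohen's kappa) between two assessors --
# the human-AI system and the onsite buddy -- over per-step pass/fail ratings.
# The arrays below are synthetic stand-ins for the 195 assessed steps.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(2)
ai_system = rng.integers(0, 2, 195)            # 1 = step performed correctly, 0 = error flagged
onsite_buddy = ai_system.copy()
flip = rng.choice(195, size=2, replace=False)  # introduce a couple of disagreements
onsite_buddy[flip] = 1 - onsite_buddy[flip]

print(f"kappa = {cohen_kappa_score(ai_system, onsite_buddy):.2f}")
```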
We describe coronavirus disease 2019 (COVID-19) cases among nonphysician healthcare personnel (HCP) by work location. The proportion of HCP with COVID-19 was highest in the emergency department and lowest among those working remotely. COVID-19 and non–COVID-19 units had similar proportions of HCP with COVID-19 (13%). Cases decreased across all work locations following COVID-19 vaccination.
Background: The effectiveness of PPE in preventing self-contamination of healthcare workers (HCWs) and transmission of pathogens (airborne and contact) in the emergency department (ED) is highly dependent on consistent, appropriate use of the PPE and on other interactions with it (eg, storing, cleaning). Pre–COVID-19 studies focused primarily on individual HCW contributions to incorrect or suboptimal PPE use. We conducted an analysis of ED video recordings using a human-factors engineering framework (ie, the Systems Engineering Initiative for Patient Safety, SEIPS) to identify work-system–level contributions to inappropriate PPE use by HCWs while they provide care in their actual clinical environment. Methods: In total, 47 video sessions (each ~15 minutes) were recorded between June 2020 and May 2021 using a GoPro camera in an 8-bed pod area, designated for persons under investigation (PUI) and confirmed COVID-19–positive patients, in the ED of a large, tertiary-care, academic medical center. These recordings captured a ‘landscape view’: 2 video cameras were set up to capture the entire ED pod area and HCWs as they provided care. A team with expertise in hemorrhagic fever, infection prevention and control, and emergency medicine reviewed each video together and extracted data using a semistructured form. Results: Guided by the 5 components of the SEIPS work system model (ie, task, physical environment, person, organization, and tools and technology), multiple work-system failure points influencing HCWs’ appropriate use of PPE were identified. For example, under the task component, HCWs were observed not donning and doffing in the recommended sequence. Under the physical environment component, inconsistencies in COVID-19 status signage on patients’ doors and ambiguous labelling of work areas designated as clean (donning) and dirty (doffing) sites acted as barriers to appropriate PPE use. Conclusions: Human factors–based analysis of video recordings of actual ED work identified a variety of work-system factors that impede appropriate or correct use of PPE by HCWs. Future efforts to improve appropriate PPE use should focus on eliminating or mitigating the effects of these work-system factors.
Funding: US CDC
Disclosures: The authors gratefully acknowledge the CDC for funding this work. This material is based upon work supported by the Naval Sea Systems Command (under contract No. N00024-13-D-6400, Task Order NH076). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Naval Sea Systems Command (NAVSEA) or the US CDC.
Background: Whether working on COVID-19–designated units puts healthcare workers (HCWs) at higher risk of acquiring COVID-19 is not fully understood. We report trends in COVID-19 incidence among nonphysician HCWs and the association between the risk of acquiring COVID-19 and work location in the hospital. Methods: The University of Iowa Hospitals & Clinics (UIHC) is an 811-bed academic medical center serving as a referral center for Iowa. We retrospectively collected COVID-19–associated data for nonphysician HCWs from the Employee Health Clinic between June 1, 2020, and July 31, 2021. The abstracted data included age, sex, job title, work location, history of COVID-19, and the date of the positive COVID-19 test for those with a history of COVID-19. We excluded HCWs who did not have a designated work location and those who worked on multiple units during the same shift (eg, medicine residents, hospitalists) to assess the association between COVID-19 infection and work location. Job titles were divided into 5 categories: (1) nurse, (2) medical assistant (MA), (3) technician, (4) clerk, and (5) other (eg, patient access, billing office). Work locations were divided into 6 categories: (1) emergency department (ED), (2) COVID-19 unit, (3) non–COVID-19 unit, (4) clinic, (5) perioperative units, and (6) remote work. Results: We identified 6,971 HCWs with work locations recorded. During the study period, 758 HCWs (10.8%) reported being diagnosed with COVID-19. Of these 758 COVID-19 cases, 658 (86.8%) were diagnosed before vaccines became available. The location with the highest COVID-19 incidence was the ED (17%), followed by COVID-19 and non–COVID-19 units (12.7% each), clinics (11.0%), perioperative units (9.4%), and remote work (6.6%). Conclusions: Strict and special infection control strategies may be needed for HCWs in the ED, especially where vaccine uptake is low. The administrative control of having HCWs work remotely may be associated with a lower incidence of COVID-19. Given that differences in COVID-19 incidence among HCWs by location were smaller and comparable after COVID-19 vaccines became available, facilities should make COVID-19 vaccination mandatory as a condition of employment for all HCWs, especially in areas where COVID-19 incidence is high.
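The location comparison above reduces to a per-location incidence proportion; the short sketch below shows the calculation on made-up records (not UIHC data), with hypothetical column names.

```python
# Illustrative sketch: COVID-19 incidence among HCWs by work location, computed as
# the proportion of HCWs in each location with a positive test. Records are made up.
import pandas as pd

hcws = pd.DataFrame({
    "work_location": ["ED", "ED", "COVID-19 unit", "non-COVID-19 unit",
                      "clinic", "perioperative", "remote", "remote"],
    "covid_positive": [1, 0, 1, 0, 0, 0, 0, 1],
})

incidence = (hcws.groupby("work_location")["covid_positive"]
                 .mean()
                 .sort_values(ascending=False)
                 .rename("incidence"))
print(incidence)
```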
Background: The IDSA has a clinical definition for catheter-related bloodstream infection (CRBSI) that requires ≥1 set of blood cultures from the catheter and ≥1 set from a peripheral vein. However, because blood cultures obtained from a central line may represent contamination rather than true infection, many institutions discourage blood cultures from central lines. We describe blood culture ordering practices in patients with a central line. Methods: The University of Iowa Hospitals & Clinics is an academic medical center with 860 hospital beds. We retrospectively collected data for blood cultures obtained from adult patients (aged ≥18 years) in the emergency department or an inpatient unit during 2020. We focused on the first blood cultures obtained during each admission because they are usually obtained before antibiotic initiation and are the most important opportunity to diagnose bacteremia. We classified blood-culture orders as follows: CRBSI workup, non-CRBSI sepsis workup, or incomplete workup. We defined a CRBSI workup as ≥1 blood culture from a central line and ≥1 peripheral blood culture (IDSA guidelines). We defined a non-CRBSI sepsis workup as ≥2 peripheral blood cultures without cultures from a central line, because providers might have suspected secondary bacteremia rather than CRBSI. We defined an incomplete workup as any order that met neither the CRBSI nor the non-CRBSI sepsis workup definition; this occurred when only 1 peripheral culture was obtained or when ≥1 central-line culture was obtained without peripheral cultures. Results: We included 1,150 patient admissions with 4,071 blood cultures. In total, 349 patient admissions with blood culture orders (30.4%) met the CRBSI workup definition, 62.8% were deemed non-CRBSI sepsis workups, and 6.9% were deemed incomplete workups. Stratified by location, ICUs had the highest percentage of orders with incomplete workups (8.8%), followed by wards (7.2%) and the emergency department (5.1%). In total, 204 patient admissions had ≥1 positive blood culture (17.7%). The most frequently isolated organisms were Staphylococcus epidermidis (n = 33, 16.2%), Staphylococcus aureus (n = 16, 7.8%), and Escherichia coli (n = 15, 7.4%). Conclusions: Analysis of blood culture data allowed us to identify units at our institution that were underperforming in terms of ordering the blood cultures necessary to diagnose CRBSI. Familiarity with CRBSI guidelines and a reduction in inappropriate ordering will help lead to early and proper diagnosis of CRBSI, which can reduce its morbidity, mortality, and cost.
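The order-classification rule in the methods can be written as a small function, shown below. The definitions follow the abstract, but the function and its arguments are illustrative rather than the authors' actual code.

```python
# Illustrative sketch of the order-classification rule described above: counts of
# central-line and peripheral blood cultures for the first order of an admission are
# mapped to CRBSI workup, non-CRBSI sepsis workup, or incomplete workup.
def classify_blood_culture_order(n_central_line: int, n_peripheral: int) -> str:
    if n_central_line >= 1 and n_peripheral >= 1:
        return "CRBSI workup"             # >=1 central-line set AND >=1 peripheral set
    if n_central_line == 0 and n_peripheral >= 2:
        return "non-CRBSI sepsis workup"  # >=2 peripheral sets, none from the line
    return "incomplete workup"            # eg, single peripheral set, or line-only cultures

print(classify_blood_culture_order(1, 1))  # CRBSI workup
print(classify_blood_culture_order(0, 2))  # non-CRBSI sepsis workup
print(classify_blood_culture_order(1, 0))  # incomplete workup
```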
We analyzed blood-culture practices to characterize the utilization of the Infectious Diseases Society of America (IDSA) recommendations related to catheter-related bloodstream infection (CRBSI) blood cultures. Most patients with a central line had only peripheral blood cultures. Increasing the utilization of CRBSI guidelines may improve clinical care, but may also affect other quality metrics.
This paper provides a large-scale, per-game analysis of foul ball (FB) injury data from Major League Baseball (MLB) games and estimates injury frequency and severity.
Objective:
This study’s goal was to quantify and describe the rate and type of FB injuries at MLB games.
Design:
This was a retrospective review of medical care reports for patients evaluated by on-site health care providers (HCPs) over a non-contiguous 11-year period (2005-2016). Data were obtained using Freedom of Information Act (FOIA) requests.
Setting:
Data were received from three US-based MLB stadiums.
Results:
The review reported 0.42-0.55 FB injuries per game that were serious enough to warrant presentation at a first aid center. This translated to a patients per 10,000 fans rate (PPTT) of 0.13-0.23. The transport to hospital rate (TTHR) was 0.02-0.39. FB injuries frequently required analgesics but were overwhelmingly minor and occurred less often than non-FB traumatic injuries (5.2% versus 42%-49%). However, FB-injured fans were more likely to need higher levels of care and transport to hospital (TH) than people suffering other traumatic injuries at the ballpark. Contusions and head injuries were common, and FB-injured fans were often hit in the abdomen, upper extremity, face, or head. FB injuries appeared to increase over time, an increase that aligns with the sudden rise in smartphone popularity in the United States.
Conclusions and Relevance:
These data suggest that, roughly once every two or three MLB games, a foul ball causes an injury serious enough that a fan seeks medical attention. This rate is high enough to warrant attention but is comparable in frequency to other diagnostic categories. Assessing the risk to fans from FBs remains difficult, but with access to uniform data, researchers could answer persistent questions that would lead to actionable changes and help guide public policy toward safer stadiums.
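The rate definitions used in the results (injuries per game, patients per 10,000 fans, and transport-to-hospital rate) reduce to simple arithmetic; the sketch below works through them with made-up counts, and it assumes, for illustration, that TTHR is also expressed per 10,000 fans.

```python
# Illustrative sketch of the rate calculations behind the results: foul-ball (FB)
# injuries per game, patients per 10,000 fans (PPTT), and transport-to-hospital
# rate (TTHR, assumed here to be per 10,000 fans). The counts below are made up.
fb_injuries = 38          # FB injuries recorded at one stadium over a season
games = 81                # home games in that season
attendance = 2_400_000    # total season attendance
transports = 3            # FB-injured fans transported to hospital

injuries_per_game = fb_injuries / games
pptt = fb_injuries / attendance * 10_000
tthr = transports / attendance * 10_000

print(f"FB injuries per game: {injuries_per_game:.2f}")
print(f"PPTT: {pptt:.2f} per 10,000 fans")
print(f"TTHR: {tthr:.3f} per 10,000 fans")
```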
Seed retention, and ultimately seed shatter, are extremely important for the efficacy of harvest weed seed control (HWSC) and are likely influenced by various agroecological and environmental factors. Field studies investigated seed-shattering phenology of 22 weed species across three soybean [Glycine max (L.) Merr.]-producing regions in the United States. We further evaluated the potential drivers of seed shatter in terms of weather conditions, growing degree days, and plant biomass. Based on the results, weather conditions had no consistent impact on weed seed shatter. However, there was a positive correlation between individual weed plant biomass and delayed weed seed–shattering rates during harvest. This work demonstrates that HWSC can potentially reduce weed seedbank inputs of plants that have escaped early-season management practices and retained seed through harvest. However, smaller individuals of plants within the same population that shatter seed before harvest pose a risk of escaping early-season management and HWSC.
Background: COVID-19 in hospitalized patients may be the result of community acquisition or in-hospital transmission. Molecular epidemiology can help confirm hospital COVID-19 transmission and outbreaks. We describe large COVID-19 clusters identified in our hospital and apply molecular epidemiology to confirm outbreaks. Methods: The University of Iowa Hospitals and Clinics is an 811-bed academic medical center. We identified large clusters involving patients with hospital-onset COVID-19 detected during March–October 2020. Large clusters included ≥10 individuals (patients, visitors, or HCWs) with a laboratory-confirmed COVID-19 diagnosis (RT-PCR) and an epidemiologic link. Epidemiologic links were defined as hospitalization, work, or visiting in the same unit during the incubation or infectious period of the index case. Hospital onset was defined as a COVID-19 diagnosis ≥14 days after the admission date. Admission screening has been conducted since May 2020 and serial testing (every 5 days) since July 2020. Nasopharyngeal swab specimens were retrieved for viral whole-genome sequencing (WGS). Cluster cases whose specimens differed by ≤5 mutations pairwise were considered part of an outbreak. WGS was performed using Oxford Nanopore Technology and protocols from the ARTIC network. Results: We identified 2 large clusters involving patients with hospital-onset COVID-19. Cluster 1: 2 hospital-onset cases were identified in a medical-surgical unit in June 2020. Source and contact tracing revealed 4 additional patients, 1 visitor, and 13 employees with COVID-19. Median patient age was 62 years (range, 38–79), and all were male. In total, 17 samples (6 patients, 1 visitor, and 10 HCWs) were available for WGS. Cluster 2: A hospital-onset case was identified via serial testing in a non–COVID-19 intensive care unit in September 2020. Source investigation, contact tracing, and serial testing revealed 3 additional patients and 8 HCWs. One HCW also had a community exposure. Median patient age was 60 years (range, 48–68), and all were male. In total, 11 samples (4 patients and 7 HCWs) were sequenced. Using WGS, cluster 1 was confirmed to be an outbreak: WGS showed 0–5 mutations between samples. Cluster 2 was also an outbreak: WGS showed less diversity (0–3 mutations) and ruled out the HCW with a community exposure (a difference of 20 mutations). Conclusion: Whole-genome sequencing confirmed the outbreaks identified using classic epidemiologic methods. Serial testing allowed for early outbreak detection. Early outbreak detection and implementation of control measures may decrease outbreak size and genetic diversity.
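The ≤5-mutation rule used to define outbreak membership amounts to a pairwise comparison of aligned consensus sequences; the sketch below illustrates it with toy sequences standing in for the SARS-CoV-2 genomes, not the study's data.

```python
# Illustrative sketch: pairwise mutation differences between aligned consensus
# sequences, flagging pairs within the <=5-mutation outbreak threshold used above.
# The toy sequences below stand in for aligned SARS-CoV-2 genomes.
from itertools import combinations

consensus = {
    "patient_A": "ACGTACGTACGTACGT",
    "patient_B": "ACGTACGAACGTACGT",   # 1 difference from patient_A
    "HCW_C":     "ACGTACGTACGTACGA",   # 1 difference from patient_A
    "HCW_D":     "TTAAACCTACGAACTA",   # 8 differences from patient_A (community exposure)
}

def n_differences(a: str, b: str) -> int:
    """Count positions where two equal-length aligned sequences differ."""
    return sum(x != y for x, y in zip(a, b))

for (id1, s1), (id2, s2) in combinations(consensus.items(), 2):
    d = n_differences(s1, s2)
    status = "within outbreak threshold" if d <= 5 else "excluded"
    print(f"{id1} vs {id2}: {d} mutations -> {status}")
```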