Alcohol causes more harm than any other substance. Despite this, a large majority of patients with alcohol use disorder (AUD) go untreated. As emergency medicine providers, we are uniquely positioned to bridge this treatment gap. The observation unit (OU) can be an effective site to manage the consequences of AUD and to initiate treatment. Treatment initiation in the emergency department OU has been shown to be more effective than simple referral. OU management may involve dedicated pathways for the treatment of mild alcohol withdrawal and alcohol intoxication, and the OU allows time for initiation of AUD treatment, including medications (e.g. naltrexone or acamprosate).
To quantify the impact of patient- and unit-level risk adjustment on infant hospital-onset bacteremia (HOB) standardized infection ratio (SIR) ranking.
Design:
A retrospective, multicenter cohort study.
Setting and participants:
Infants admitted to 284 neonatal intensive care units (NICUs) in the United States between 2016 and 2021.
Methods:
Expected HOB rates and SIRs were calculated using four adjustment strategies: birthweight (model 1), birthweight and postnatal age (model 2), birthweight and NICU complexity (model 3), and birthweight, postnatal age, and NICU complexity (model 4). Sites were ranked according to the unadjusted HOB rate, and these rankings were compared to rankings based on the four adjusted SIR models.
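As an illustrative sketch of the ranking approach (not the study's code; unit names and counts below are hypothetical), an SIR is the observed event count divided by the model-expected count, and units are then ranked from smallest to largest:

```python
def sir(observed, expected):
    """Standardized infection ratio: observed HOB events / model-expected events."""
    return observed / expected

# (unit, observed HOB events, expected events under a given adjustment model)
units = [("A", 12, 6.0), ("B", 9, 9.5), ("C", 4, 2.0), ("D", 3, 6.0)]

# Rank smallest-to-largest SIR; the top quartile of ranks is "worst-performing".
ranking = [name for name, obs, exp in sorted(units, key=lambda u: sir(u[1], u[2]))]
print(ranking)  # best- to worst-performing under this model
```

Repeating the ranking under each expected-count model (models 1 through 4) and comparing quartile membership is what identifies units that move out of the worst-performing quartile after adjustment.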
Results:
Compared to unadjusted HOB rate ranking (smallest to largest), the number and proportion of NICUs that left the fourth (worst-performing) quartile following adjustment were as follows: birthweight (16, 22.5%); birthweight and postnatal age (19, 26.8%); birthweight and NICU complexity (22, 31.0%); and birthweight, postnatal age, and NICU complexity (23, 32.4%). Comparing NICUs that moved into the better-performing quartiles after birthweight adjustment to those that remained in the better-performing quartiles regardless of adjustment, the median percentage of low-birthweight infants was 17.1% (interquartile range [IQR]: 15.8, 19.2) vs 8.7% (IQR: 4.8, 12.6), and the median percentage of infants who died was 2.2% (IQR: 1.8, 3.1) vs 0.5% (IQR: 0.01, 12.0), respectively.
Conclusion:
Adjusting for patient- and unit-level complexity moved one-third of the NICUs in the worst-performing quartile into a better-performing quartile. Risk adjustment may allow for a more accurate comparison across units with varying levels of patient acuity and complexity.
COVID-19 changed the epidemiology of community-acquired respiratory viruses. We explored patterns of respiratory viral testing to understand which tests are most clinically useful in the postpandemic era.
Methods:
We conducted a retrospective observational study of discharge data from PINC-AI (formerly Premier), a large administrative database. Use of multiplex nucleic acid amplification respiratory panels in acute care, including small (2–5 targets), medium (6–11 targets), and large (>11 targets) panels, was compared between the early pandemic (03/2020–10/2020), the late pandemic (11/2020–04/2021), and the prepandemic respiratory season (11/2019–02/2020) using ANOVA.
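The between-period comparisons rely on a one-way ANOVA; a minimal, self-contained sketch of the F statistic on toy monthly counts (hypothetical numbers, not the study's data) is:

```python
# Minimal one-way ANOVA F statistic, as used to compare testing volumes
# across periods. The groups below are hypothetical toy counts.

def f_oneway(*groups):
    values = [x for g in groups for x in g]
    grand_mean = sum(values) / len(values)
    k, n = len(groups), len(values)
    # Between-group sum of squares (group mean vs grand mean)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares (observation vs its group mean)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

prepandemic, early, late = [1, 2, 3], [2, 3, 4], [5, 6, 7]
print(f_oneway(prepandemic, early, late))
```

In practice a library routine (e.g. `scipy.stats.f_oneway`) would be used and would also return the P value.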
Results:
A median of 160.5 facilities contributed testing data per quarter (IQR 155.5–169.5). Prepandemic, facilities averaged 103 respiratory panels monthly (sd 138), including 79 large (sd 126), 7 medium (sd 31), and 16 small panels (sd 73). Relative to prepandemic, utilization decreased during the early pandemic (62 panels monthly/facility; sd 112) but returned to the prepandemic baseline by the late pandemic (107 panels monthly/facility; sd 211). Relative to prepandemic, late pandemic testing involved more small panel use (58 monthly/facility, sd 156) and less large panel use (47 monthly/facility, sd 116). Comparisons among periods demonstrated significant differences in overall testing (P < 0.0001), large panel use (P < 0.0001), and small panel use (P < 0.0001).
Conclusions:
Postpandemic, clinical use of respiratory panel testing shifted from predominantly large panels to predominantly small panels. Factors driving this change may include resource availability, costs, and the clinical utility of targeting important pathogenic viruses instead of testing “for everything.”
Although the link between alcohol involvement and behavioral phenotypes (e.g. impulsivity, negative affect, executive function [EF]) is well-established, the directionality of these associations, specificity to stages of alcohol involvement, and extent of shared genetic liability remain unclear. We estimate longitudinal associations between transitions among alcohol milestones, behavioral phenotypes, and indices of genetic risk.
Methods
Data came from the Collaborative Study on the Genetics of Alcoholism (n = 3681; ages 11–36). Alcohol transitions (first: drink, intoxication, alcohol use disorder [AUD] symptom, AUD diagnosis), internalizing, and externalizing phenotypes came from the Semi-Structured Assessment for the Genetics of Alcoholism. EF was measured with the Tower of London and Visual Span Tasks. Polygenic scores (PGS) were computed for alcohol-related and behavioral phenotypes. Cox models estimated associations among PGS, behavior, and alcohol milestones.
Results
Externalizing phenotypes (e.g. conduct disorder symptoms) were associated with future initiation and drinking problems (hazard ratio (HR)⩾1.16). Internalizing (e.g. social anxiety) was associated with hazards for progression from first drink to severe AUD (HR⩾1.55). Initiation and AUD were associated with increased hazards for later depressive symptoms and suicidal ideation (HR⩾1.38), and initiation was associated with increased hazards for future conduct symptoms (HR = 1.60). EF was not associated with alcohol transitions. Drinks per week PGS was linked with increased hazards for alcohol transitions (HR⩾1.06). Problematic alcohol use PGS increased hazards for suicidal ideation (HR = 1.20).
Conclusions
Behavioral markers of addiction vulnerability precede and follow alcohol transitions, highlighting dynamic, bidirectional relationships between behavior and emerging addiction.
Diagnostic stewardship is increasingly recognized as a powerful tool to improve patient safety. Given the close relationship between diagnostic testing and antimicrobial misuse, antimicrobial stewardship (AMS) pharmacists should be key members of the diagnostic team. Pharmacists practicing in AMS already frequently engage with clinicians to improve the diagnostic process and have many skills needed for the implementation of diagnostic stewardship initiatives. As diagnostic stewardship becomes more broadly used, all infectious disease clinicians, including pharmacists, must collaborate to optimize patient care.
One of the largest remnants of tropical dry forest is the South American Gran Chaco. A quarter of this biome lies in Paraguay, but there have been few studies in the Paraguayan Chaco. The Gran Chaco flora is diverse in structure, function, composition and phenology. Fundamental ecological questions remain in this biome, such as which bioclimatic factors shape the Chaco’s composition, structure and phenology. In this study, we integrated forest inventories from permanent plots with monthly high-resolution NDVI from PlanetScope and historical climate data from WorldClim to identify bioclimatic predictors of forest structure, composition and phenology. We found that bioclimatic variables related to precipitation were correlated with stem density and the Pielou evenness index, while temperature-related variables correlated with basal area. The best predictor of forest phenology (NDVI variation) was precipitation lagged by 1 month, followed by temperature lagged by 2 months. In the period of greatest water stress, the phenological response correlated with diversity, height and basal area, showing links with dominance and tree size. Our results indicate that although the ecology and function of the Dry Chaco Forest are characterised by water limitation, temperature has a moderating effect by limiting growth and influencing leaf flush and deciduousness.
Unsupervised remote digital cognitive assessment makes frequent testing feasible and allows learning to be measured across days on participants’ own devices. More rapid detection of diminished learning may provide a valuable metric that is sensitive to cognitive change over short intervals. In this study, we examine the feasibility and predictive validity of a novel digital assessment that measures learning of the same material over 7 days in older adults.
Participants and Methods:
The Boston Remote Assessment for Neurocognitive Health (BRANCH) (Papp et al., 2021) is a web-based assessment administered over 7 consecutive days, repeating the same stimuli each day to capture multi-day learning slopes. The assessment includes Face-Name (verbal-visual associative memory), Groceries-Prices (numeric-visual associative memory), and Digits-Signs (speeded processing of numeric-visual associations). Our sample consisted of 200 cognitively unimpaired older adults enrolled in ongoing observational studies (mean age=74.5, 63% female, 87% Caucasian, mean education=16.6) who completed the tasks daily, at home, on their own digital devices. Participants had previously completed in-clinic paper-and-pencil tests to compute a Preclinical Alzheimer’s Cognitive Composite (PACC-5). Mixed-effects models controlling for age, sex, and education were used to examine the associations between PACC-5 scores and both initial performance and multi-day learning on the three BRANCH measures.
Results:
Adherence was high, with 96% of participants completing all seven days of consecutive assessment; demographic factors were not associated with differences in adherence. Younger participants had higher Day 1 scores on all three measures and steeper learning slopes on Digits-Signs. Female participants performed better on Face-Name (T=3.35, p<.001) and Groceries-Prices (T=2.00, p=0.04) on Day 1, but no sex differences were seen in learning slopes; there were no sex differences on Digits-Signs. Black participants had lower Day 1 scores on Face-Name (T=-3.34, p=0.003) and Digits-Signs (T=3.44, p=0.002), but no racial differences were seen in learning slopes for any measure. Education was not associated with any measure. First-day performance on Face-Name (B=0.39, p<.001), but not its learning slope (B=0.008, p=0.302), was associated with the PACC-5. For Groceries-Prices, both Day 1 performance (B=0.27, p<.001) and learning slope (B=0.02, p=0.03) were associated with the PACC-5. Digits-Signs Day 1 performance (B=0.31, p<.001) and learning slope (B=0.06, p<.001) were also both associated with the PACC-5.
Conclusions:
Seven days of remote, brief cognitive assessment was feasible in a sample of cognitively unimpaired older adults. Although various demographic factors were associated with initial performance on the tests, multi-day learning slopes were largely unrelated to demographics, suggesting their potential utility in diverse samples. Both initial performance and learning scores on an associative memory test and a processing speed test were independently related to baseline cognition, indicating that these initial performance and learning metrics are convergent but unique in their contributions. The findings signal the value of measuring differences in learning across days as a means of sensitively identifying differences in cognitive function before signs of frank impairment are observed. Next steps will involve identifying the optimal way to model multi-day learning on these subtests to evaluate their potential associations with Alzheimer’s disease biomarkers.
The gold standard for hand hygiene (HH) while wearing gloves requires removing gloves, performing HH, and donning new gloves between WHO moments. The novel strategy of applying alcohol-based hand rub (ABHR) directly to gloved hands might be effective and efficient.
Design:
A mixed-method, multicenter, 3-arm, randomized trial.
Setting:
Adult and pediatric medical-surgical, intermediate, and intensive care units at 4 hospitals.
Participants:
Healthcare personnel (HCP).
Interventions:
HCP were randomized to 3 groups: ABHR applied directly to gloved hands, the current standard, or usual care.
Methods:
Gloved hands were sampled via direct imprint. Gold-standard and usual-care arms were compared with the ABHR intervention.
Results:
Bacteria were identified on gloved hands after 432 (67.4%) of 641 observations in the gold-standard arm versus 548 (82.8%) of 662 observations in the intervention arm (P < .01). HH required a mean of 14 seconds in the intervention arm and a mean of 28.7 seconds in the gold-standard arm (P < .01). Bacteria were identified on gloved hands after 133 (98.5%) of 135 observations in the usual-care arm versus 173 (76.6%) of 226 observations in the intervention arm (P < .01). Of 331 gloves tested, 6 (1.8%) were found to have microperforations; all were identified in the intervention arm [6 (2.9%) of 205].
Conclusions:
Compared with usual care, contamination of gloved hands was significantly reduced by applying ABHR directly to gloved hands, although contamination remained statistically higher than in the gold-standard arm. Given the time savings and microbiological benefit over usual care, and the lack of feasibility of adhering to the gold standard, the Centers for Disease Control and Prevention and the World Health Organization should consider advising HCP to decontaminate gloved hands with ABHR when HH moments arise during single-patient encounters.
Multiplex polymerase chain reaction (PCR) respiratory panels are rapid, highly sensitive tests for viral and bacterial pathogens that cause respiratory infections. In this study, we (1) described best practices in the implementation of respiratory panels based on expert perspectives and (2) identified tools for diagnostic stewardship to enhance the usefulness of testing.
Methods:
We conducted a survey of the Society for Healthcare Epidemiology of America Research Network to explore current and future approaches to diagnostic stewardship of multiplex PCR respiratory panels.
Results:
In total, 41 sites completed the survey (response rate, 50%). Multiplex PCR respiratory panels were perceived as supporting accurate diagnoses at 35 sites (85%), supporting more efficient patient care at 33 sites (80%), and improving patient outcomes at 23 sites (56%). Thirteen sites (32%) reported that testing may support diagnosis or patient care without improving patient outcomes. Furthermore, 24 sites (58%) had implemented diagnostic stewardship, with a median of 3 interventions (interquartile range, 1–4) per site. The interventions most frequently reported as effective were structured order sets to guide test ordering (4 sites), restrictions on test ordering based on clinician or patient characteristics (3 sites), and structured communication of results (2 sites). Education was reported as “helpful” but with limitations (3 sites).
Conclusions:
Many hospital epidemiologists and experts in infectious diseases perceive multiplex PCR respiratory panels as useful tests that can improve diagnosis, patient care, and patient outcomes. However, institutions frequently employ diagnostic stewardship to enhance the usefulness of testing, including most commonly clinical decision support to guide test ordering.
OBJECTIVES/GOALS: Glioblastomas (GBMs) are heterogeneous, treatment-resistant tumors that are driven by populations of cancer stem cells (CSCs). In this study, we perform an epigenetics-focused functional genomics screen in GBM organoids and identify WDR5 as an essential epigenetic regulator in the SOX2-enriched, therapy-resistant cancer stem cell niche. METHODS/STUDY POPULATION: Despite their importance for tumor growth, few molecular mechanisms critical for CSC population maintenance have been exploited for therapeutic development. We developed a spatially resolved loss-of-function screen in GBM patient-derived organoids to identify essential epigenetic regulators in the SOX2-enriched, therapy-resistant niche. Our niche-specific screens identified WDR5, an H3K4 histone methyltransferase responsible for activating specific gene expression, as indispensable for GBM CSC growth and survival. RESULTS/ANTICIPATED RESULTS: In GBM CSC models, WDR5 inhibitors blocked WRAD complex assembly and reduced H3K4 trimethylation and expression of genes involved in CSC-relevant oncogenic pathways. H3K4me3 peaks lost with WDR5 inhibitor treatment occurred disproportionately on POU transcription factor motifs, which are required for stem cell maintenance and include the POU5F1(OCT4)::SOX2 motif. We incorporated a SOX2/OCT4 motif-driven GFP reporter system into our CSC models and found that WDR5 inhibitor treatment resulted in dose-dependent silencing of stem cell reporter activity. Further, WDR5 inhibitor treatment altered the stem cell state, disrupting CSC in vitro growth and self-renewal as well as in vivo tumor growth. DISCUSSION/SIGNIFICANCE: Our results unveil the role of WDR5 in maintaining the CSC state in GBM and provide a rationale for therapeutic development of WDR5 inhibitors for GBM and other advanced cancers.
This conceptual and experimental framework can be applied to many cancers, and can unmask unique microenvironmental biology and rationally designed combination therapies.
Reward processing has been proposed to underpin the atypical social feature of autism spectrum disorder (ASD). However, previous neuroimaging studies have yielded inconsistent results regarding the specificity of atypicalities for social reward processing in ASD.
Aims
Utilising a large sample, we aimed to assess reward processing in response to reward type (social, monetary) and reward phase (anticipation, delivery) in ASD.
Method
Functional magnetic resonance imaging during social and monetary reward anticipation and delivery was performed in 212 individuals with ASD (7.6–30.6 years of age) and 181 typically developing participants (7.6–30.8 years of age).
Results
Across social and monetary reward anticipation, whole-brain analyses showed hypoactivation of the right ventral striatum in participants with ASD compared with typically developing participants. Further, region of interest analysis across both reward types yielded ASD-related hypoactivation in both the left and right ventral striatum. Across delivery of social and monetary reward, hyperactivation of the ventral striatum in individuals with ASD did not survive correction for multiple comparisons. Dimensional analyses of autism and attention-deficit hyperactivity disorder (ADHD) scores were not significant. In categorical analyses, post hoc comparisons showed that ASD effects were most pronounced in participants with ASD without co-occurring ADHD.
Conclusions
Our results do not support current theories linking atypical social interaction in ASD to specific alterations in social reward processing. Instead, they point towards a generalised hypoactivity of ventral striatum in ASD during anticipation of both social and monetary rewards. We suggest this indicates attenuated reward seeking in ASD independent of social content and that elevated ADHD symptoms may attenuate altered reward seeking in ASD.
We examine a Query Theory account of risky-choice framing effects: when risky choices are framed as a gain, people are generally risk averse, but when an equivalent choice is framed as a loss, people are risk seeking. Consistent with Query Theory, frames affected the structure of participants’ arguments: gain-frame participants listed arguments favoring the certain option earlier and more often than loss-frame participants. These argumentative shifts mediated framing effects; manipulating participants’ initial arguments attenuated them. While emotions, as measured by the PANAS, were related to frames but not to choices, an exploratory text analysis of the affective valence of arguments was related to both. Compared to loss-frame participants, gain-frame participants expressed more positive sentiment toward the certain option than the risky option. This relative-sentiment index predicted choices by itself but not when included with the structure of arguments. Further, manipulating initial arguments did not significantly affect participants’ relative sentiment. Prior to changing choices, risky-choice frames alter both the structure and the emotional valence of participants’ internal arguments.
High-quality evidence from prospective longitudinal studies in humans is essential to testing hypotheses related to the developmental origins of health and disease. In this paper, the authors draw upon their own experiences leading birth cohorts with longitudinal follow-up into adulthood to describe specific challenges and lessons learned. Challenges are substantial and grow over time. Long-term funding is essential for study operations and critical to retaining study staff, who develop relationships with participants and hold important institutional knowledge and technical skill sets. To maintain contact, we recommend that cohorts apply multiple strategies for tracking and obtain as much high-quality contact information as possible before the child’s 18th birthday. To maximize engagement, we suggest that cohorts offer flexibility in visit timing, length, location, frequency, and type. Data collection may entail multiple modalities, even at a single collection timepoint, including measures that are self-reported, research-measured, and administrative with a mix of remote and in-person collection. Many topics highly relevant for adolescent and young adult health and well-being are considered to be private in nature, and their assessment requires sensitivity. To motivate ongoing participation, cohorts must work to understand participant barriers and motivators, share scientific findings, and provide appropriate compensation for participation. It is essential for cohorts to strive for broad representation including individuals from higher risk populations, not only among the participants but also the staff. Successful longitudinal follow-up of a study population ultimately requires flexibility, adaptability, appropriate incentives, and opportunities for feedback from participants.
Policies that promote conversion of antibiotics from intravenous to oral route administration are considered “low hanging fruit” for hospital antimicrobial stewardship programs. We developed a simple metric based on digestive days of therapy divided by total days of therapy for targeted agents and a method for hospital comparisons. External comparisons may help identify opportunities for improving prospective implementation.
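A minimal sketch of the proposed metric (the function name and counts are hypothetical illustrations, not from the paper):

```python
# Proposed metric: days of therapy (DOT) administered by the digestive
# (oral/enteral) route for targeted agents, divided by total DOT for those
# agents. A higher proportion suggests greater IV-to-oral conversion.

def digestive_dot_proportion(digestive_dot, total_dot):
    return digestive_dot / total_dot

# e.g. a hypothetical hospital with 450 digestive DOT of 1,200 total DOT
print(digestive_dot_proportion(450, 1200))  # 0.375
```

Computing this proportion per hospital over the same agent list and period is what enables the external comparisons described above.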
Crop residue can intercept and adsorb residual herbicides, leading to reduced efficacy. However, adsorption can sometimes be reversed by rainfall or irrigation. Greenhouse experiments were conducted to evaluate the effect of differential overhead irrigation level on barnyardgrass response to acetochlor, pyroxasulfone, and pendimethalin applied to bare soil or wheat straw–covered soil. Acetochlor applied to wheat straw–covered soil resulted in 25% to 40% reduced control, 30 to 50 more plants 213 cm−2, and greater biomass than bare soil applications, regardless of irrigation amount. Barnyardgrass suppression by pyroxasulfone applications to wheat straw–covered soil improved with increased irrigation; however, weed control levels similar to bare soil applications were not observed after any irrigation amount. Barnyardgrass densities from pyroxasulfone applications to bare soil decreased with irrigation but did not change in applications to wheat straw–covered soil. Aboveground barnyardgrass biomass from pyroxasulfone decreased with greater irrigation amounts in both bare soil and wheat straw–covered soil applications; however, decreased efficacy in wheat straw–covered soil applications was not alleviated with irrigation. Pendimethalin was the only herbicide tested that displayed reduced efficacy when irrigation amounts increased in applications to both bare soil and wheat straw–covered soil. Barnyardgrass control from pendimethalin applied to wheat straw–covered soil was similar to bare soil applications when approximately 0.3 to 1.2 cm of irrigation was applied; however, irrigation amounts greater than 1.2 cm resulted in greater barnyardgrass control in bare soil applications. No differences between wheat straw–covered soil and bare soil applications of pendimethalin were observed for barnyardgrass densities. These data indicate that increased irrigation or rainfall level can increase efficacy of acetochlor and pyroxasulfone. 
Optimal rainfall or irrigation amounts required for efficacy similar to bare soil applications are herbicide specific, and some herbicides, such as pendimethalin, may be adversely affected by increased rainfall or irrigation.
The Trial Innovation Network has established an infrastructure for single IRB review in response to federal policies. The Network’s single IRBs (sIRBs) have successfully supported more than 70 multisite studies via more than 800 reliance arrangements. This has generated several lessons learned that can benefit the national clinical research enterprise as we work to improve the conduct of clinical trials. These lessons include distinguishing the roles of the single IRB from institutional Human Research Protections programs, establishing a consistent sIRB review model, standardizing the collection of local context and supplemental study-specific information, and educating and empowering lead study teams to support their sites.
A chloroacetamide herbicide by application timing factorial experiment was conducted in 2017 and 2018 in Mississippi to investigate chloroacetamide use in a dicamba-based Palmer amaranth management program in cotton production. Herbicides used were S-metolachlor or acetochlor, and application timings were preemergence, preemergence followed by (fb) early postemergence, preemergence fb late postemergence, early postemergence alone, late postemergence alone, and early postemergence fb late postemergence. Dicamba was included in all preemergence applications, and dicamba plus glyphosate was included with all postemergence applications. Differences in cotton and weed response due to chloroacetamide type were minimal, and cotton injury at 14 d after late postemergence application was less than 10% for all application timings. Late-season weed control was reduced up to 30% and 53% if chloroacetamide application occurred preemergence or late postemergence only, respectively. Late-season weed densities were minimized if multiple applications were used instead of a single application. Cotton height was reduced by up to 23% if a single application was made late postemergence relative to other application timings. Chloroacetamide application at any timing except preemergence alone minimized late-season weed biomass. Yield was maximized by any treatment involving multiple applications or early postemergence alone, whereas applications preemergence or late postemergence alone resulted in up to 56% and 27% yield losses, respectively. While no yield loss was reported by delaying the first of sequential applications until early postemergence, forgoing a preemergence application is not advisable given the multiple factors that may delay timely postemergence applications such as inclement weather.
In the UK, acute mental healthcare is provided by in-patient wards and crisis resolution teams. Readmission to acute care following discharge is common. Acute day units (ADUs) are also provided in some areas.
Aims
To assess predictors of readmission to acute mental healthcare following discharge in England, including availability of ADUs.
Method
We enrolled a national cohort of adults discharged from acute mental healthcare in the English National Health Service (NHS) between 2013 and 2015, determined the risk of readmission to either in-patient or crisis teams, and used multivariable, multilevel logistic models to evaluate predictors of readmission.
Results
Of a total of 231 998 eligible individuals discharged from acute mental healthcare, 49 547 (21.4%) were readmitted within 6 months, with a median time to readmission of 34 days (interquartile range 10–88 days). Most variation in readmission (98%) was attributable to individual patient-level rather than provider (trust)-level effects (2.0%). Risk of readmission was not associated with local availability of ADUs (adjusted odds ratio 0.96, 95% CI 0.80–1.15). Statistically significant elevated risks were identified for participants who were female, older, single, from Black or mixed ethnic groups, or from more deprived areas. Clinical predictors included shorter index admission, psychosis and being an in-patient at baseline.
Conclusions
Relapse and readmission to acute mental healthcare are common following discharge and occur early. Readmission was not influenced significantly by trust-level variables including availability of ADUs. More support for relapse prevention and symptom management may be required following discharge from acute mental healthcare.
The COVID-19 pandemic changed the clinical research landscape in America. The most urgent challenge has been to rapidly review protocols submitted by investigators that were designed to learn more about or intervene in COVID-19. Institutional Review Board (IRB) offices developed plans to rapidly review protocols related to the COVID-19 pandemic. An online survey of IRB Directors at Clinical and Translational Science Awards (CTSA) institutions was conducted, along with two focus groups. Across the CTSA institutions, 66% reviewed COVID-19 protocols across all their IRB committees, 22% assigned protocols to just one committee, and 10% created a new committee for COVID-19 protocols. Fifty-two percent reported COVID-19 protocols were reviewed much faster, 41% somewhat faster, and 7% at the same speed as other protocols. Three percent reported that the COVID-19 protocols were reviewed with much better quality, 32% reported slightly better quality, and 65% reported the reviews were of the same quality as similar protocols before the COVID-19 pandemic. IRBs were able to respond to the emergent demand for reviewing COVID-19 protocols. Most of the increased review capacity was due to extra effort by IRB staff and members, not to changes that will be easily implemented across all research going forward.
Spontaneous capillary flow of liquids in narrow spaces plays a key role in a plethora of applications including lab-on-a-chip devices, heat pipes, propellant management devices in spacecraft and flexible printed electronics manufacturing. In this work, we use a combination of theory and experiment to examine capillary-flow dynamics in open rectangular microchannels, which are often found in these applications. Scanning electron microscopy and profilometry are used to highlight the complexity of the free-surface morphology. We develop a self-similar lubrication-theory-based model accounting for this complexity and compare model predictions to those from the widely used modified Lucas–Washburn model, as well as experimental observations over a wide range of channel aspect ratios $\lambda$ and equilibrium contact angles $\theta_0$. We demonstrate that for large $\lambda$ the two model predictions are indistinguishable, whereas for smaller $\lambda$ the lubrication-theory-based model agrees better with experiments. The lubrication-theory-based model is also shown to have better agreement with experiments at smaller $\theta_0$, although as $\theta_0 \rightarrow \pi/4$ it fails to account for important axial curvature contributions to the free surface and the agreement worsens. Finally, we show that the lubrication-theory-based model also quantitatively predicts the dynamics of fingers that extend ahead of the meniscus. These findings elucidate the limitations of the modified Lucas–Washburn model and demonstrate the importance of accounting for the effects of complex free-surface morphology on capillary-flow dynamics in open rectangular microchannels.
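For reference (this is standard capillarity theory, not reproduced from the abstract above), the classical Lucas–Washburn relation for a cylindrical tube of radius $R$ predicts diffusive meniscus advance,
$$x(t) = \sqrt{\frac{\gamma R \cos\theta_0}{2\mu}\,t},$$
where $\gamma$ is the surface tension and $\mu$ the dynamic viscosity. Modified Lucas–Washburn models for open rectangular channels replace the geometric prefactor to account for the channel cross-section but retain the $x \propto t^{1/2}$ scaling, which is the baseline against which the lubrication-theory-based model is compared.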