The consent process for research studies can be burdensome for potential participants due to complex information and lengthy consent forms. This pragmatic study aimed to improve the consent experience and evaluate its impact on participant decision making, study knowledge, and satisfaction with the In Our DNA SC program, a population-based genomic screening initiative. We compared two consent procedures: standard consent (SC) involving a PDF document and enhanced consent (EC) incorporating a pictograph and true or false questions. Decision-making control, study knowledge, satisfaction, and time to consent were assessed. We analyzed data for 109 individuals who completed the SC and 96 who completed the EC. Results indicated strong decision-making control and high levels of knowledge and satisfaction in both groups. While no significant differences were found between the two groups, the EC experience took longer for participants to complete. Future modifications include incorporating video modules and launching a Spanish version of the consent experience. Overall, this study contributes to the growing literature on consent improvements and highlights the need to assess salient components and explore participant preferences for receiving consent information.
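As an illustration only (not the study's actual analysis code), the two consent arms could be compared on a knowledge score and on time to consent with a nonparametric two-group test; all values and variable names below are hypothetical.

```python
# Illustrative sketch: comparing standard-consent (SC) and enhanced-consent (EC)
# arms on a knowledge score and on time to consent. Data are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
knowledge_sc = rng.integers(7, 11, size=109)   # e.g., score on a 0-10 knowledge quiz
knowledge_ec = rng.integers(7, 11, size=96)
minutes_sc = rng.normal(12, 4, size=109)       # minutes to complete consent
minutes_ec = rng.normal(16, 5, size=96)

# Ordinal / skewed outcomes: Mann-Whitney U tests
u_know, p_know = stats.mannwhitneyu(knowledge_sc, knowledge_ec, alternative="two-sided")
u_time, p_time = stats.mannwhitneyu(minutes_sc, minutes_ec, alternative="two-sided")
print(f"Knowledge: U={u_know:.0f}, p={p_know:.3f}")
print(f"Time to consent: U={u_time:.0f}, p={p_time:.3f}")
```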
In response to the COVID-19 pandemic, we rapidly implemented a plasma coordination center within two months to support transfusions for two outpatient randomized controlled trials. The center design was based on an investigational drug services model and a Food and Drug Administration-compliant database to manage blood product inventory and trial safety.
Methods:
A core investigational team adapted a cloud-based platform to randomize patient assignments and track inventory distribution of control plasma and high-titer COVID-19 convalescent plasma of different blood groups from 29 donor collection centers directly to blood banks serving 26 transfusion sites.
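The abstract does not describe the platform's internals; as a loose sketch of the core allocation logic such a system needs (blood-group-aware randomization plus inventory tracking), consider the following, in which all structures and names are hypothetical.

```python
# Minimal sketch: randomize a participant to convalescent plasma (ccp) vs. control
# within their blood group and decrement the matching site inventory. This is an
# illustration, not the coordination center's actual system.
import random

# Hypothetical unit counts per (blood group, arm) at one transfusion site
inventory = {
    ("A", "ccp"): 4, ("A", "control"): 5,
    ("O", "ccp"): 2, ("O", "control"): 3,
}

def randomize_and_allocate(blood_group: str, rng: random.Random) -> str:
    """Assign an arm 1:1 at random; stop if the assigned arm is out of stock."""
    arm = rng.choice(["ccp", "control"])
    if inventory[(blood_group, arm)] == 0:
        raise RuntimeError(f"No {arm} units for group {blood_group}; reorder before enrolling")
    inventory[(blood_group, arm)] -= 1
    return arm

rng = random.Random(42)
print(randomize_and_allocate("A", rng), inventory)
```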
Results:
We performed 1,351 transfusions in 16 months. The transparency of the digital inventory at each site was critical to facilitate qualification, randomization, and overnight shipments of blood group-compatible plasma for transfusions into trial participants. While inventory challenges were heightened with COVID-19 convalescent plasma, the cloud-based system and the flexible approach of the plasma coordination center staff across the blood bank network enabled decentralized procurement and distribution of investigational products to maintain inventory thresholds and overcome local supply chain constraints at the sites.
Conclusion:
The rapid creation of a plasma coordination center for outpatient transfusions is infrequent in the academic setting. Distributing more than 3,100 plasma units to blood banks charged with managing investigational inventory across the U.S. in a decentralized manner posed operational and regulatory challenges while providing opportunities for the plasma coordination center to contribute to research of global importance. This program can serve as a template in subsequent public health emergencies.
The global increase in observed forest dieback, characterized by the death of tree foliage, heralds widespread decline in forest ecosystems. This degradation causes significant changes to ecosystem services and functions, including habitat provision and carbon sequestration, which can be difficult to detect using traditional monitoring techniques, highlighting the need for large-scale and high-frequency monitoring. Contemporary developments in the instruments and methods to gather and process data at large scales mean this monitoring is now possible. In particular, the advancement of low-cost drone technology and deep learning on consumer-level hardware provide new opportunities. Here, we use an approach based on deep learning and vegetation indices to assess crown dieback from RGB aerial data without the need for expensive instrumentation such as LiDAR. We use an iterative approach to match crown footprints predicted by deep learning with field-based inventory data from a Mediterranean ecosystem exhibiting drought-induced dieback, and compare expert field-based crown dieback estimation with vegetation index-based estimates. We obtain high overall segmentation accuracy (mAP: 0.519) without the need for additional technical development of the underlying Mask R-CNN model, underscoring the potential of these approaches for non-expert use and proving their applicability to real-world conservation. We also find that color-coordinate based estimates of dieback correlate well with expert field-based estimation. Substituting ground truth for Mask R-CNN model predictions showed negligible impact on dieback estimates, indicating robustness. Our findings demonstrate the potential of automated data collection and processing, including the application of deep learning, to improve the coverage, speed, and cost of forest dieback monitoring.
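As a hedged illustration of the vegetation-index step, the sketch below computes the green chromatic coordinate (GCC), one common RGB color coordinate, over a crown footprint supplied as a boolean mask (e.g., from a Mask R-CNN prediction); the specific index and code used by the authors are not given in this abstract, so everything here is an assumption.

```python
# Sketch of a color-coordinate dieback proxy for one crown. Lower mean GCC
# (greenness) over the crown pixels is taken here as a rough indicator of browner,
# more dieback-affected foliage.
import numpy as np

def mean_gcc(rgb: np.ndarray, crown_mask: np.ndarray) -> float:
    """Mean green chromatic coordinate G/(R+G+B) over the pixels of one crown."""
    pixels = rgb[crown_mask].astype(float)          # (n_pixels, 3) in R, G, B order
    r, g, b = pixels[:, 0], pixels[:, 1], pixels[:, 2]
    denom = r + g + b
    gcc = np.where(denom > 0, g / np.where(denom > 0, denom, 1), np.nan)
    return float(np.nanmean(gcc))

# Toy example: a 4x4 RGB tile with a 2x2 crown mask
rgb = np.random.default_rng(1).integers(0, 256, size=(4, 4, 3))
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
print(f"Crown mean GCC: {mean_gcc(rgb, mask):.3f}")
```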
The pandemic has halted the traditional way of life as we knew it. Because of the highly contagious nature of the virus, physical distancing became the primary norm for reducing spread, inevitably leading to social isolation. The older adult population is vulnerable to environmental changes, making them highly prone to stress during disasters. Comorbidities, lack of social support, loneliness and uncertainty can be common precipitating factors. The National Institute of Mental Health & Neurosciences, with the Ministry of Health & Family Welfare, commenced a helpline to provide psychosocial support and mental health services in thirteen languages to distressed persons across the Indian subcontinent. The study aims to explore the factors that led older adult callers to seek help from the helpline during the COVID-19 pandemic by analysing the call recordings and, as a secondary objective, to develop a checklist for telephone-based psychosocial care providers to assess the psychosocial issues of older adults. The researcher will use a “naturalised” conceptual framework of transcription, which necessitates a literal interpretation of the call recordings. Recordings of the calls will be transcribed, and thematic analysis will be conducted to identify the psychosocial issues older adult callers face. Categories will be identified, refined, and specified for coding. A series of key-informant interviews will be conducted online with a group of mental health professionals (defined as per the Mental Health Care Act, 2017) associated with or working in geriatric mental health. The findings will shed light on how the psychosocial needs of the older adult population evolve during a pandemic and will also reflect the different aspects of telephone-based psychosocial support and mental health services and their need during disasters. The study’s outcome will reveal the needs of this at-risk populace and explore the issues and concerns unique to the COVID-19 pandemic. The findings will also serve as a foundation for future studies probing research areas analogous to pandemics and other biological disasters, telephone-based psychosocial support, and the older adult populace.
Florpyrauxifen-benzyl was commercialized in 2018 to target barnyardgrass and aquatic or broadleaf weeds. Field studies were conducted from 2019 to 2021 in Stoneville, MS, to evaluate barnyardgrass control following a simulated failure of florpyrauxifen-benzyl or other common postemergence rice herbicides. In the first field study, florpyrauxifen-benzyl was applied at 0 and 15 g ai ha–1 to rice at the two- to three-leaf stage to simulate a failed application targeting barnyardgrass. Sequential herbicide treatments included no herbicide and full rates of imazethapyr, quinclorac, bispyribac-Na, and cyhalofop applied 7 or 14 d after florpyrauxifen-benzyl treatment. The second field study was designed to evaluate barnyardgrass control with florpyrauxifen-benzyl following simulated failure of postemergence rice herbicides. Initial herbicide treatments included no herbicide and half rates of imazethapyr, quinclorac, bispyribac-Na, and propanil. Sequential applications at 7 or 14 d after the initial herbicide treatments included florpyrauxifen-benzyl at 0 and 30 g ai ha–1. Results from the first study indicated barnyardgrass control 21 d after final treatment (DAFT) was greater with sequential treatments at 7 compared with 14 d after initial treatment (DA-I) with no initial application of florpyrauxifen-benzyl. Therefore, delaying sequential treatments until 14 d after initial florpyrauxifen-benzyl at 15 g ha–1 allowed barnyardgrass to become too large to control with other rice herbicides. Rough rice yield was reduced in plots where quinclorac application was delayed from 7 to 14 DA-I with no initial application of florpyrauxifen-benzyl. The second study suggested that florpyrauxifen-benzyl application should be delayed 14 d after a herbicide failure. Although no differences in barnyardgrass control 21 DAFT were detected whether florpyrauxifen-benzyl was applied 7 or 14 DA-I of any herbicide utilized, >85% control was only achieved when florpyrauxifen-benzyl application was delayed 14 DA-I. These results demonstrate barnyardgrass control options following simulated failed applications of common rice herbicides.
Recent conceptualizations of concussion symptoms have begun to shift from a latent perspective (which suggests a common cause; i.e., head injury) to a network perspective (where symptoms influence and interact with each other throughout injury and recovery). Recent research has examined the network structure of the Post-Concussion Symptom Scale (PCSS) cross-sectionally at pre- and post-concussion, with the most important symptoms including dizziness, sadness, and feeling more emotional. However, within-subject comparisons between network structures at pre- and post-concussion have yet to be made. These analyses can provide invaluable information on whether concussion alters symptom interactions. This study examined within-athlete changes in PCSS network connectivity and centrality (the importance of different symptoms within the networks) from baseline to post-concussion.
Participants and Methods:
Participants were selected from a larger longitudinal database of high school athletes who completed the PCSS in English as part of their standard athletic training protocol (N=1,561). The PCSS is a 22-item self-report measure of common concussion symptoms (e.g., headache, vomiting, dizziness) in which individuals rate symptom severity on a 7-point Likert scale. Participants were excluded if they endorsed a history of brain surgery, neurodevelopmental disorder, or treatment history for epilepsy, migraines, psychiatric disorders, or alcohol/substance use. Network analysis was conducted on PCSS ratings from a baseline and an acute post-concussion (within 72 hours post-injury) assessment. In each network, the nodes represented individual symptoms, and the edges connecting them represented their partial correlations. The regularized partial correlation networks were estimated using the Gaussian graphical model, with the GLASSO algorithm used for regularization. Each symptom’s expected influence (the sum of its partial correlations with the other symptoms) was calculated to identify the most central symptoms in each network. Recommended techniques from Epskamp et al. (2018) were used to assess the accuracy of the estimated symptom importance and relationships. Network Comparison Tests were conducted to assess changes in network connectivity, structure, and node influence.
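A minimal Python analogue of this estimation pipeline (likely performed with R packages in the original work) is sketched below: a regularized Gaussian graphical model is fit with the graphical lasso, the precision matrix is converted to partial correlations, and one-step expected influence is taken as the signed column sums. The data are simulated placeholders.

```python
# Sketch: regularized partial correlation network and one-step expected influence.
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(0)
n_athletes, n_symptoms = 500, 22
X = rng.normal(size=(n_athletes, n_symptoms))        # stand-in for PCSS ratings

model = GraphicalLassoCV().fit(X)
P = model.precision_

# Partial correlation between symptoms i and j: -P_ij / sqrt(P_ii * P_jj)
d = np.sqrt(np.diag(P))
pcorr = -P / np.outer(d, d)
np.fill_diagonal(pcorr, 0.0)

# Expected influence (one-step): sum of a symptom's edges, keeping their signs
expected_influence = pcorr.sum(axis=0)
print("Most central symptom index:", int(np.argmax(expected_influence)))
```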
Results:
Both the baseline and acute post-concussion networks contained negative and positive relationships. The expected influence of symptoms was stable in both networks, with difficulty concentrating having the greatest expected influence in both. The strongest edges in the networks were between symptoms within similar domains of functioning (e.g., sleeping less was associated with trouble falling asleep). Network connectivity was not significantly different between networks (S=0.43), suggesting the overall degree to which symptoms are related did not differ at acute post-concussion. Network structure differed significantly at acute post-concussion (M=0.305), suggesting specific relationships in the acute post-concussion network differed from those at baseline. In the acute post-concussion network, vomiting was less central, while sensitivity to noise and feeling mentally foggy were more central.
Conclusions:
PCSS network structure is altered at acute post-concussion, suggesting concussion may disrupt symptom networks and change how the experience of certain symptoms relates to the experience of others after a concussive injury. Future research should compare PCSS networks later in recovery to examine whether similar structural changes persist or the network returns to its baseline structure; observing post-concussion changes in PCSS network structure could ultimately inform symptom resolution trajectories.
Previous studies have found differences between monolingual and bilingual athletes on ImPACT, the most widely used sport-related concussion (SRC) assessment measure. Most recently, results suggest that monolingual English-speaking athletes outperformed bilingual English- and Spanish-speaking athletes on the Visual Motor Speed and Reaction Time composites. Before further investigation of these differences can occur, measurement invariance of ImPACT must be established to ensure that differences are not attributable to measurement error. The current study aimed to 1) replicate a recently identified four-factor model using cognitive subtest scores of ImPACT on baseline assessments in monolingual English-speaking athletes and bilingual English- and Spanish-speaking athletes and 2) establish measurement invariance across groups.
Participants and Methods:
Participants included high school athletes who were administered ImPACT in English as part of their standard pre-season athletic training protocol. Participants were excluded if they had a self-reported history of concussion, autism, ADHD, or learning disability, or a treatment history of epilepsy/seizures, brain surgery, meningitis, psychiatric disorders, or substance/alcohol use. The final sample included 7,948 monolingual English-speaking athletes and 7,938 bilingual English- and Spanish-speaking athletes with valid baseline assessments. Language variables were based on self-report. As the number of monolingual athletes was substantially larger than the number of bilingual athletes, monolingual athletes were randomly selected from a larger sample to match the bilingual athletes on age, sex, and sport. Confirmatory factor analysis (CFA) was used to test competing models, including one-factor, two-factor, and three-factor models, to determine whether a recently identified four-factor model (Visual Memory, Visual Reaction Time, Verbal Memory, Working Memory) provided the best fit to the data. Eighteen subtest scores from ImPACT were used in the CFAs. Through increasingly restrictive multigroup CFAs (MGCFA), configural, metric, scalar, and residual levels of invariance were assessed by language group.
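The nested invariance models (configural, metric, scalar, residual) are typically compared with chi-square (likelihood-ratio) difference tests; the sketch below shows that comparison step only, with hypothetical fit values, since the CFAs themselves require dedicated SEM software not reproduced here.

```python
# Chi-square difference test between two nested invariance models.
# Fit values below are hypothetical placeholders.
from scipy.stats import chi2

def chisq_diff_test(chi2_constrained: float, df_constrained: int,
                    chi2_free: float, df_free: int) -> tuple[float, int, float]:
    """Likelihood-ratio comparison of a constrained model against a freer nested model."""
    d_chi2 = chi2_constrained - chi2_free
    d_df = df_constrained - df_free
    p = chi2.sf(d_chi2, d_df)
    return d_chi2, d_df, p

# Hypothetical example: metric model (loadings constrained equal) vs. configural model
d_chi2, d_df, p = chisq_diff_test(chi2_constrained=812.4, df_constrained=268,
                                  chi2_free=790.1, df_free=254)
print(f"Delta chi2={d_chi2:.1f}, delta df={d_df}, p={p:.3f}")  # p > .05 supports invariance
```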
Results:
CFA indicated that the four-factor model provided the best fit in both the monolingual and bilingual samples compared with competing models. However, some goodness-of-fit statistics were below recommended cutoffs, so post-hoc model modifications were made on a theoretical basis and by examination of modification indices. The modified four-factor model had adequate to superior fit, met criteria for all goodness-of-fit indices, and was retained as the configural model for testing measurement invariance across language groups. MGCFA revealed that residual invariance, the strictest level of invariance, was achieved across groups.
Conclusions:
This study provides support for a modified four-factor model of the latent structure of ImPACT cognitive scores in monolingual English-speaking and bilingual English- and Spanish-speaking high school athletes at baseline assessment. Results further suggest that the differences between monolingual English-speaking and bilingual English- and Spanish-speaking athletes reported in prior ImPACT studies are not caused by measurement error. The reasons for these differences remain unclear but are consistent with other studies suggesting monolingual advantages. Given the increase in bilingual individuals in the United States and in high school athletics, future research should investigate other sources of error, such as item bias, and examine predictive validity to further understand whether group differences reflect real differences between these athletes.
Long-term exposure to the psychoactive ingredient in cannabis, delta-9-tetrahydrocannabinol (THC), has been consistently raised as a notable risk factor for schizophrenia. Additionally, cannabis is frequently used as a coping mechanism by individuals diagnosed with schizophrenia. Cannabis use in schizophrenia has been associated with greater severity of psychotic symptoms, non-compliance with medication, and increased relapse rates. Neuropsychological changes have also been implicated in long-term cannabis use and in the course of illness of schizophrenia. However, the impact of co-occurring cannabis use on cognitive functioning in individuals with schizophrenia is less thoroughly explored. The purpose of this meta-analysis was to examine whether neuropsychological test performance and symptoms in schizophrenia differ as a function of THC use status. A second aim was to examine whether symptom severity moderates the relationship between THC use and cognitive test performance among people with schizophrenia.
Participants and Methods:
Peer-reviewed articles comparing schizophrenia with and without cannabis use disorder (SZ SUD+; SZ SUD-) were selected from three scholarly databases: Ovid, Google Scholar, and PubMed. The following search terms were applied to yield studies for inclusion: neuropsychology, cognition, cognitive, THC, cannabis, marijuana, and schizophrenia. Eleven articles containing data on psychotic symptoms and neurocognition, with SZ SUD+ and SZ SUD- groups, were included in the final analyses. Five domains of neurocognition were identified across the included articles (Processing Speed, Attention, Working Memory, Verbal Learning Memory, and Reasoning and Problem Solving). Positive and negative symptom data were derived from eligible studies using the Positive and Negative Syndrome Scale (PANSS), the Scale for the Assessment of Positive Symptoms (SAPS), the Scale for the Assessment of Negative Symptoms (SANS), the Self-Evaluation of Negative Symptoms (SNS), the Brief Psychiatric Rating Scale (BPRS), and Structured Clinical Interview for DSM Disorders (SCID) scores. Meta-analysis and meta-regression were conducted using R.
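The analyses were run in R; purely as an illustrative analogue of the same logic, the sketch below pools simulated effect sizes with a DerSimonian-Laird random-effects model and then regresses them on a study-level moderator (positive symptom severity) using random-effects weights.

```python
# Sketch: random-effects pooling (DerSimonian-Laird) plus a weighted moderator
# regression. Effect sizes and moderator values below are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
k = 11                                   # number of included studies
yi = rng.normal(-0.1, 0.3, size=k)       # SZ SUD+ vs. SUD- effect sizes (e.g., Hedges' g)
vi = rng.uniform(0.02, 0.08, size=k)     # sampling variances
positive_sx = rng.normal(70, 10, size=k) # study-level positive symptom severity

# DerSimonian-Laird estimate of between-study variance tau^2
w = 1.0 / vi
y_bar = np.sum(w * yi) / np.sum(w)
Q = np.sum(w * (yi - y_bar) ** 2)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

# Meta-regression: weighted least squares with random-effects weights
w_star = 1.0 / (vi + tau2)
X = sm.add_constant(positive_sx)
fit = sm.WLS(yi, X, weights=w_star).fit()
print(fit.params)   # slope ~ moderation of the effect by positive symptom severity
```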
Results:
No statistically significant differences were observed between SZ SUD+ and SZ SUD- across the cognitive domains of Processing Speed, Attention, Working Memory, Verbal Learning Memory, and Reasoning and Problem Solving. Positive symptom severity moderated the relationship between THC use and processing speed, whereas negative symptom severity did not. Neither positive nor negative symptom severity significantly moderated the relationship between THC use and the other cognitive domains.
Conclusions:
Positive symptoms moderated the relationship between cannabis use and processing speed among people with schizophrenia. The reasons for this are unclear and require further exploration. Additional investigation is warranted to better understand the impact of THC use on other tests of neuropsychological performance and on symptoms in schizophrenia.
Hippocampal hyperperfusion has been observed in people at Clinical High Risk for Psychosis (CHR), is associated with adverse longitudinal outcomes and represents a potential treatment target for novel pharmacotherapies. Whether cannabidiol (CBD) has ameliorative effects on hippocampal blood flow (rCBF) in CHR patients remains unknown.
Methods
Using a double-blind, parallel-group design, 33 CHR patients were randomized to a single oral 600 mg dose of CBD or placebo; 19 healthy controls did not receive any drug. Hippocampal rCBF was measured using Arterial Spin Labeling. We examined differences relating to CHR status (controls v. placebo), effects of CBD in CHR (placebo v. CBD), and linear between-group relationships, such that placebo > CBD > controls or controls > CBD > placebo, using a combination of hypothesis-driven and exploratory whole-brain analyses.
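The "linear between-group relationship" can be illustrated, at the level of a single extracted hippocampal rCBF value per participant, as an ordered contrast across the three groups; the sketch below uses simulated values and is not the study's voxelwise analysis.

```python
# Sketch: one-way linear contrast (controls < CBD < placebo) on extracted rCBF values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
controls = rng.normal(58, 6, size=19)    # hypothetical rCBF (ml/100 g/min)
cbd      = rng.normal(61, 6, size=16)
placebo  = rng.normal(64, 6, size=17)

groups = [controls, cbd, placebo]
weights = np.array([-1.0, 0.0, 1.0])     # linear trend weights

means = np.array([g.mean() for g in groups])
ns = np.array([len(g) for g in groups])
# Pooled within-group variance and standard error of the contrast
sp2 = sum((len(g) - 1) * g.var(ddof=1) for g in groups) / (int(sum(ns)) - 3)
contrast = float(weights @ means)
se = float(np.sqrt(sp2 * np.sum(weights**2 / ns)))
t = contrast / se
p = 2 * stats.t.sf(abs(t), df=int(sum(ns)) - 3)
print(f"Linear contrast t={t:.2f}, p={p:.3f}")
```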
Results
Placebo-treated patients had significantly higher hippocampal rCBF bilaterally (all pFWE < 0.01) compared with healthy controls. There were no suprathreshold effects in the CBD v. placebo contrast. However, we found a significant linear relationship in the right hippocampus (pFWE = 0.035) such that rCBF was highest in the placebo group, lowest in controls, and intermediate in the CBD group. Exploratory whole-brain results replicated previous findings of hyperperfusion in the hippocampus, striatum, and midbrain in CHR patients, and provided novel evidence of increased rCBF in inferior-temporal and lateral-occipital regions in patients under CBD compared with placebo.
Conclusions
These findings suggest that hippocampal blood flow is elevated in the CHR state and may be partially normalized by a single dose of CBD. CBD therefore merits further investigation as a potential novel treatment for this population.
The U.S. Department of Agriculture–Agricultural Research Service (USDA-ARS) has been a leader in weed science research covering topics ranging from the development and use of integrated weed management (IWM) tactics to basic mechanistic studies, including biotic resistance of desirable plant communities and herbicide resistance. ARS weed scientists have worked in agricultural and natural ecosystems, including agronomic and horticultural crops, pastures, forests, wild lands, aquatic habitats, wetlands, and riparian areas. Through strong partnerships with academia, state agencies, private industry, and numerous federal programs, ARS weed scientists have made contributions to discoveries in the newest fields of robotics and genetics, as well as the traditional and fundamental subjects of weed–crop competition and physiology and integration of weed control tactics and practices. Weed science at ARS is often overshadowed by other research topics; thus, few are aware of the long history of ARS weed science and its important contributions. This review is the result of a symposium held at the Weed Science Society of America’s 62nd Annual Meeting in 2022 that included 10 separate presentations in a virtual Weed Science Webinar Series. The overarching themes of management tactics (IWM, biological control, and automation), basic mechanisms (competition, invasive plant genetics, and herbicide resistance), and ecosystem impacts (invasive plant spread, climate change, conservation, and restoration) represent core ARS weed science research that is dynamic and efficacious and has been a significant component of the agency’s national and international efforts. This review highlights current studies and future directions that exemplify the science and collaborative relationships both within and outside ARS. Given the constraints of weeds and invasive plants on all aspects of food, feed, and fiber systems, there is an acknowledged need to face new challenges, including agriculture and natural resources sustainability, economic resilience and reliability, and societal health and well-being.
Bilingualism is thought to confer advantages in executive functioning, thereby contributing to cognitive reserve and a later age of dementia symptom onset. While the relation between bilingualism and age of onset has been explored in Alzheimer's dementia, there are few studies examining bilingualism as a contributor to cognitive reserve in frontotemporal dementia (FTD). In line with previous findings, we hypothesized that bilinguals with behavioral variant FTD would be older at symptom onset than monolinguals, but that no such effect would be found in patients with nonfluent/agrammatic variant primary progressive aphasia (PPA) or semantic variant PPA. Contrary to our hypothesis, we found no significant difference in age at symptom onset between monolingual and bilingual speakers within any of the FTD variants, and there were no notable differences on neuropsychological measures. Overall, our results do not support a protective effect of bilingualism in patients with FTD-spectrum disease in a U.S.-based cohort.
Understanding spatial variation in origination and extinction can help to unravel the mechanisms underlying macroevolutionary patterns. Although methods have been developed for estimating global origination and extinction rates from the fossil record, no framework exists for applying these methods to restricted spatial regions. Here, we test the efficacy of three metrics for regional analysis, using simulated fossil occurrences. These metrics are then applied to the marine invertebrate record of the Permian and Triassic to examine variation in extinction and origination rates across latitudes. Extinction and origination rates were generally uniform across latitudes for these time intervals, including during the Capitanian and Permian–Triassic mass extinctions. The small magnitude of this variation, combined with the possibility of its attribution to sampling bias, cautions against linking any observed differences to contrasting evolutionary dynamics. Our results indicate that origination and extinction levels were more variable across clades than across latitudes.
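The abstract does not name the three metrics tested; as one example of the kind of rate estimator typically involved, the sketch below computes Foote-style per-capita origination and extinction rates from hypothetical regional boundary-crosser counts.

```python
# Sketch: per-capita rates from boundary-crosser counts (Foote 2000).
# All counts below are hypothetical illustrations.
import math

def per_capita_rates(n_bt: int, n_bL: int, n_Ft: int, dt: float) -> tuple[float, float]:
    """Per-capita origination and extinction rates for one interval.

    n_bt: taxa crossing both the bottom and top boundaries of the interval
    n_bL: taxa crossing the bottom only (last appearance within the interval)
    n_Ft: taxa first appearing within the interval and crossing the top
    dt:   interval duration (Myr)
    """
    extinction = -math.log(n_bt / (n_bt + n_bL)) / dt
    origination = -math.log(n_bt / (n_bt + n_Ft)) / dt
    return origination, extinction

p, q = per_capita_rates(n_bt=120, n_bL=35, n_Ft=28, dt=5.0)
print(f"origination={p:.3f}/Myr, extinction={q:.3f}/Myr")
```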
We used psychological methods to investigate how two prominent interventions, participatory decision making and enforcement, influence voluntary cooperation in a common-pool resource dilemma. Groups (N=40) harvested resources from a shared resource pool. Individuals in the Voted-Enforce condition voted on conservation rules and could use economic sanctions to enforce them. In other conditions, individuals could not vote (Imposed-Enforce condition), lacked enforcement (Voted condition), or both (Imposed condition). Cooperation was strongest in the Voted-Enforce condition (Phase 2). Moreover, these groups continued to cooperate voluntarily after enforcement was removed later in the experiment. Cooperation was weakest in the Imposed-Enforce condition and degraded after enforcement ceased. Thus, enforcement improved voluntary cooperation only when individuals voted. Perceptions of procedural justice, self-determination, and security were highest in the Voted-Enforce condition. These factors (legitimacy, security) increased voluntary cooperation by promoting rule acceptance and internalized motivation. Voted-Enforce participants also felt closer to one another (i.e., self-other merging), further contributing to their cooperation. Neither voting nor enforcement produced these sustained psychological conditions alone. Voting lacked security without enforcement (Voted condition), so the individuals who disliked the rule (i.e., the losing voters) pillaged the resource. Enforcement lacked legitimacy without voting (Imposed-Enforce condition), so it crowded out internal reasons for cooperation. Governance interventions should carefully promote security without stifling fundamental needs (e.g., procedural justice) or undermining internal motives for cooperation.
We apply the author's computational approach to groups to our empirical work studying and modelling riots. We suggest that assigning roles in particular gives insight, and measuring the frequency of bystander behaviour provides a method to understand the dynamic nature of intergroup conflict, allowing social identity to be incorporated into models of riots.
The University of California, Los Angeles (UCLA) Library Data Science Center (DSC) is a research and education unit supporting faculty, researchers and students through consultation, instruction, co-curricular programming and data infrastructure. It provides a wide range of researcher support and development in data and computationally intensive scholarship, geospatial analysis and emerging technologies. Since 2018, the DSC has developed services that provide education and support for the increasingly complex research landscape.
This chapter outlines the process used to create the new services. It gives context to the Center's origins as the Social Science Data Archive (SSDA) that provided social science data services at UCLA from the 1970s. The chapter examines how integrating the SSDA into the Library in 2014 led to a shift of focus toward a service that supports data creation, interpretation and publication regardless of discipline or methodology. It articulates the drivers for change on the UCLA campus that led to the redesign of service offerings and describes how the DSC's involvement with the Carpentries movement expanded its ability to teach data and coding skills. The chapter also reflects on the challenges faced in establishing a service profile that is non-traditional for a library while focusing on building an inclusive community that democratizes data science tools and their research applications.
UCLA: context
UCLA is a public research institution located in Los Angeles, California. UCLA has a diverse community of scholars that encompasses nearly 30,000 undergraduates pursuing 125 majors, 13,000 graduate students in 59 research programs and over 7,000 faculty members. To support its research activities, the University has deployed a department-based research support infrastructure. Research data support has been heavily siloed across campus, depending on when and where departments can access resources to support these endeavors. Several distinct groups have emerged that provide different layers of support for different disciplines. For example, researchers in STEM (science, technology, engineering and mathematics) fields have ready access to course-integrated resources in campus units such as the UCLA Collaboratory in the Institute for Quantitative and Computational Biology and the Office of Advanced Research Computing. These institutes have large staffs and support thousands of researchers annually.
In contrast, departments in social sciences, humanities and arts lack access to similar institutes or infrastructure. However, data-intensive research is a part of nearly every discipline's research workflow.
The coronavirus disease 2019 (COVID-19) pandemic was one of the most significant causes of death worldwide in 2020. The disease is caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), an RNA virus of the subfamily Orthocoronavirinae related to two other clinically relevant coronaviruses, SARS-CoV and MERS-CoV. Like other coronaviruses and several other viruses, SARS-CoV-2 originated in bats. However, unlike other coronaviruses, SARS-CoV-2 resulted in a devastating pandemic. The SARS-CoV-2 pandemic rages on due to viral evolution that leads to more transmissible and immune-evasive variants. Technology such as genomic sequencing has driven the shift from syndromic to molecular epidemiology and promises a better understanding of variants. The COVID-19 pandemic has exposed critical impediments that must be addressed to develop the science of pandemics. Much of the progress is being applied in the developed world. However, barriers to the use of molecular epidemiology in low- and middle-income countries (LMICs) remain, including a lack of logistics for equipment and reagents and a lack of training in analysis. We review the molecular epidemiology literature to understand its origins from the SARS epidemic (2002–2003) through influenza events and the current COVID-19 pandemic. We advocate for improved genomic surveillance of SARS-CoV and a better understanding of pathogen diversity in potential zoonotic hosts. This work will require training in phylogenetic analysis and high-performance computing to improve analyses of the origin and spread of pathogens. The overarching goals are to understand and abate zoonosis risk through interdisciplinary collaboration and the lowering of logistical barriers.
COVID-19 vaccines are likely to be scarce for years to come. Many countries, from India to the U.K., have demonstrated vaccine nationalism. What are the ethical limits to this vaccine nationalism? Neither extreme nationalism nor extreme cosmopolitanism is ethically justifiable. Instead, we propose the fair priority for residents (FPR) framework, in which governments can retain COVID-19 vaccine doses for their residents only to the extent that they are needed to maintain a noncrisis level of mortality while they are implementing reasonable public health interventions. Practically, a noncrisis level of mortality is that experienced during a bad influenza season, which society considers an acceptable background risk. Governments take action to limit mortality from influenza, but there is no emergency that includes severe lockdowns. This “flu-risk standard” is a nonarbitrary and generally accepted heuristic. Mortality above the flu-risk standard justifies greater governmental interventions, including retaining vaccines for a country's own citizens over global need. The precise level of vaccination needed to meet the flu-risk standard will depend upon empirical factors related to the pandemic. This links the ethical principles to the scientific data emerging from the emergency. Thus, the FPR framework recognizes that governments should prioritize procuring vaccines for their country when doing so is necessary to reduce mortality to noncrisis flu-like levels. But after that, a government is obligated to do its part to share vaccines to reduce risks of mortality for people in other countries. We consider and reject objections to the FPR framework based on a country: (1) having developed a vaccine, (2) raising taxes to pay for vaccine research and purchase, (3) wanting to eliminate economic and social burdens, and (4) being ineffective in combating COVID-19 through public health interventions.
During March 27–July 14, 2020, the Centers for Disease Control and Prevention’s National Healthcare Safety Network extended its surveillance to hospital capacities for responding to the COVID-19 pandemic. The data showed wide variations across hospitals in case burden, bed occupancy, ventilator usage, and healthcare personnel and supply status. These data were used to inform emergency responses.
Using data from the National Healthcare Safety Network (NHSN), we assessed changes to intensive care unit (ICU) bed capacity during the early months of the COVID-19 pandemic. Changes in capacity varied by hospital type and size. ICU beds increased by 36%, highlighting the pressure placed on hospitals during the pandemic.
The First Episode Rapid Early Intervention for Eating Disorders (FREED) service model is associated with significant reductions in wait times and improved clinical outcomes for emerging adults with recent-onset eating disorders. An understanding of how FREED is implemented is a necessary precondition to enable an attribution of these findings to key components of the model, namely the wait-time targets and care package.
Aims
This study evaluated fidelity to the FREED service model during the multicentre FREED-Up study.
Method
Participants were 259 emerging adults (aged 16–25 years) with an eating disorder of <3 years’ duration who were offered treatment through the FREED care pathway. Patient journey records documented patient care from screening to end of treatment. We examined adherence to the wait-time targets (engagement call within 48 h, assessment within 2 weeks, treatment within 4 weeks) and to the care package, as well as differences in adherence across diagnoses and treatment groups.
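As a simple illustration of how adherence to the three wait-time targets could be computed from patient journey records, the sketch below derives each target from hypothetical screening, engagement, assessment and treatment dates; the reference point and data structure are assumptions, not the study's actual records.

```python
# Sketch: proportion of patients meeting each wait-time target, from hypothetical
# patient journey dates (here all targets are measured from the screening date).
import pandas as pd

records = pd.DataFrame({
    "screened": pd.to_datetime(["2020-01-06", "2020-01-07"]),
    "engaged":  pd.to_datetime(["2020-01-07", "2020-01-10"]),
    "assessed": pd.to_datetime(["2020-01-17", "2020-01-28"]),
    "treated":  pd.to_datetime(["2020-02-01", "2020-02-20"]),
})

adherence = pd.DataFrame({
    "engagement_48h": (records["engaged"] - records["screened"]) <= pd.Timedelta(hours=48),
    "assessment_2wk": (records["assessed"] - records["screened"]) <= pd.Timedelta(weeks=2),
    "treatment_4wk":  (records["treated"] - records["screened"]) <= pd.Timedelta(weeks=4),
}).mean()                      # proportion of patients meeting each target
print(adherence)
```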
Results
There were significant increases (16–40%) in adherence to the wait-time targets following the introduction of FREED, irrespective of diagnosis. Receiving FREED under optimal conditions also increased adherence to the targets. Use of the care package differed by component and diagnosis. The most frequently used care package activities were psychoeducation and dietary change. The attention-to-transitions component was used less often.
Conclusions
This study provides an indication of adherence levels to key components of the FREED model. These adherence rates can tentatively be considered as clinically meaningful thresholds. Results highlight aspects of the model and its implementation that warrant future examination.