The description and delineation of trematode species is a major ongoing task. Across the field there has been, and still is, great variation in the standard of this work and in the sophistication of the taxonomic hypotheses proposed. Although most species are relatively unambiguously distinct from their congeners, many are either morphologically very similar (including the major and rapidly growing component of cryptic species) or are highly variable morphologically despite little to no molecular variation in standard DNA markers. Here we review challenges in species delineation in the context provided by the historical literature and by the use of morphological, geographical, host, and molecular data. We observe that there are potential challenges associated with all these information sources. As a result, we encourage careful proposal of taxonomic hypotheses, with consideration of the underlying species concepts and frank acknowledgement of weaknesses or conflicts in the data. It seems clear that no single source of data provides a wholly reliable answer to our taxonomic challenges, but that nuanced consideration of information from multiple sources (the ‘integrated approach’) provides the best possibility of developing hypotheses that will stand the test of time.
Objective:
To quantify the impact of patient- and unit-level risk adjustment on infant hospital-onset bacteremia (HOB) standardized infection ratio (SIR) ranking.
Design:
A retrospective, multicenter cohort study.
Setting and participants:
Infants admitted to 284 neonatal intensive care units (NICUs) in the United States between 2016 and 2021.
Methods:
Expected HOB rates and SIRs were calculated using four adjustment strategies: birthweight (model 1), birthweight and postnatal age (model 2), birthweight and NICU complexity (model 3), and birthweight, postnatal age, and NICU complexity (model 4). Sites were ranked according to the unadjusted HOB rate, and these rankings were compared to rankings based on the four adjusted SIR models.
Results:
Compared to unadjusted HOB rate ranking (smallest to largest), the number and proportion of NICUs that left the fourth quartile (worst-performing) following adjustments were as follows: adjusted for birthweight (16, 22.5%), birthweight and postnatal age (19, 26.8%), birthweight and NICU complexity (22, 31.0%), birthweight, postnatal age and NICU complexity (23, 32.4%). Comparing NICUs that moved into the better-performing quartiles after birthweight adjustment to those that remained in the better-performing quartiles regardless of adjustment, the median percentage of low birthweight infants was 17.1% (Interquartile Range (IQR): 15.8, 19.2) vs 8.7% (IQR: 4.8, 12.6); and the median percentage of infants who died was 2.2% (IQR: 1.8, 3.1) vs 0.5% (IQR: 0.01, 12.0), respectively.
Conclusion:
Adjusting for patient and unit-level complexity moved one-third of NICUs in the worst-performing quartile into a better-performing quartile. Risk adjustment may allow for a more accurate comparison across units with varying levels of patient acuity and complexity.
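The ranking comparison described above can be sketched in a few lines: rank units by their crude rate, rank them again by a risk-adjusted SIR (observed events divided by model-expected events), and count how many units leave the worst-performing quartile. This is a minimal illustration with hypothetical numbers, not the study's actual models or data; it also assumes equal patient-days across units so raw counts stand in for rates.

```python
def sir(observed, expected):
    """Standardized infection ratio: observed events / model-expected events."""
    return observed / expected

def quartiles(values):
    """Assign each unit a quartile (1 = best to 4 = worst) by rank of its value."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    n = len(values)
    q = [0] * n
    for rank, i in enumerate(order):
        q[i] = rank * 4 // n + 1
    return q

# Hypothetical units: observed HOB events and model-expected events
observed = [5, 9, 2, 12, 7, 3, 10, 6]
expected = [4.0, 10.0, 2.5, 6.0, 7.5, 2.0, 14.0, 5.5]

unadjusted_q = quartiles(observed)  # crude ranking (counts as proxy for rates)
adjusted_q = quartiles([sir(o, e) for o, e in zip(observed, expected)])

# Units that leave the worst-performing (fourth) quartile after adjustment
moved = sum(1 for u, a in zip(unadjusted_q, adjusted_q) if u == 4 and a < 4)
```

In this toy example, a unit with many events but an even larger expected count (a high-acuity caseload) moves out of the fourth quartile once the SIR is used, which is the mechanism behind the quartile shifts reported in the Results.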
The global population and status of Snowy Owls Bubo scandiacus are particularly challenging to assess because individuals are irruptive and nomadic, and the breeding range is restricted to the remote circumpolar Arctic tundra. The International Union for Conservation of Nature (IUCN) uplisted the Snowy Owl to “Vulnerable” in 2017 because the suggested population estimates appeared considerably lower than historical estimates, and it recommended actions to clarify the population size, structure, and trends. Here we present a broad review and status assessment, an effort led by the International Snowy Owl Working Group (ISOWG) and researchers from around the world, to estimate population trends and the current global status of the Snowy Owl. We use long-term breeding data, genetic studies, satellite-GPS tracking, and survival estimates to assess current population trends at several monitoring sites in the Arctic and we review the ecology and threats throughout the Snowy Owl range. An assessment of the available data suggests that current estimates of a worldwide population of 14,000–28,000 breeding adults are plausible. Our assessment of population trends at five long-term monitoring sites suggests that breeding populations of Snowy Owls in the Arctic have decreased by more than 30% over the past three generations and the species should continue to be categorised as Vulnerable under the IUCN Red List Criterion A2. We offer research recommendations to improve our understanding of Snowy Owl biology and future population assessments in a changing world.
The gut microbiome is impacted by certain types of dietary fibre. However, the type, duration and dose needed to elicit gut microbial changes, and whether these changes also influence microbial metabolites, remain unclear. This study investigated the effects of supplementing healthy participants with two types of non-digestible carbohydrates (resistant starch (RS) and polydextrose (PD)) on the stool microbiota and microbial metabolite concentrations in plasma, stool and urine, as secondary outcomes in the Dietary Intervention Stem Cells and Colorectal Cancer (DISC) Study. The DISC study was a double-blind, randomised controlled trial that supplemented healthy participants with RS and/or PD or placebo for 50 d in a 2 × 2 factorial design. DNA was extracted from stool samples collected pre- and post-intervention, and V4 16S rRNA gene sequencing was used to profile the gut microbiota. Metabolite concentrations were measured in stool, plasma and urine by high-performance liquid chromatography. A total of fifty-eight participants with paired samples available were included. After 50 d, no effects of RS or PD were detected on gut microbiota diversity (alpha- and beta-diversity), on genus relative abundance or on metabolite concentrations. However, a Dirichlet multinomial mixture clustering-based approach suggested that some participants changed microbial enterotype post-intervention. The gut microbiota and faecal, plasma and urinary microbial metabolites were stable in response to a 50-d fibre intervention in middle-aged adults. Larger and longer studies, including those which explore the effects of specific fibre sub-types, may be required to determine the relationships between fibre intake, the gut microbiome and host health.
Major depressive disorder (MDD) imposes a tremendous global disease burden and is the leading cause of disability worldwide. Unfortunately, individuals diagnosed with MDD typically experience a delayed response to traditional antidepressants, and many do not adequately respond to pharmacotherapy, even after multiple trials. The critical need for novel antidepressant treatments has led to a recent resurgence in the clinical application of psychedelics and of intravenous ketamine, which has been investigated as a rapid-acting treatment for treatment-resistant depression (TRD) as well as acute suicidal ideation and behavior. However, variations in the type and quality of experimental design, as well as a range of treatment outcomes in clinical trials of ketamine, make interpretation of this large body of literature challenging.
Objectives
This umbrella review aims to advance our understanding of the effectiveness of intravenous ketamine as a pharmacotherapy for TRD by providing a systematic, quantitative, large-scale synthesis of the empirical literature.
Methods
We performed a comprehensive PubMed search for peer-reviewed meta-analyses of primary studies of intravenous ketamine used in the treatment of TRD. Meta-analysis and primary studies were then screened by two independent coding teams according to pre-established inclusion criteria as well as PRISMA and METRICS guidelines. We then employed metaumbrella, a statistical package developed in R, to perform effect size calculations and conversions as well as statistical tests.
Results
In a large-scale analysis of 1,182 participants across 51 primary studies, repeated-dose administration of intravenous ketamine demonstrated statistically significant effects (p<0.05) compared to placebo-controlled as well as other experimental conditions in patients with TRD, as measured by standardized clinician-administered and self-report depression symptom severity scales.
Conclusions
This study provides large-scale, quantitative support for the effectiveness of intravenous, repeated-dose ketamine as a therapy for TRD, along with a report of the relative effectiveness of several treatment parameters across a large and rapidly growing literature. Future investigations should use similar analytic tools to examine evidence-stratified conditions, the comparative effectiveness of other routes of administration and treatment schedules, and the moderating influence of other clinical and demographic variables on the effectiveness of ketamine for TRD and suicidal ideation and behavior.
There has been rapidly growing interest in understanding the pharmaceutical and clinical properties of psychedelic and dissociative drugs, with a particular focus on ketamine. This compound, long known for its anesthetic and dissociative properties, has garnered attention due to its potential to rapidly alleviate symptoms of depression, especially in individuals with treatment-resistant depression (TRD) or acute suicidal ideation or behavior. However, while ketamine’s psychopharmacological effects are increasingly well-documented, the specific patterns of its neural impact remain a subject of exploration and basic questions remain about its effects on functional activation in both clinical and healthy populations.
Objectives
This meta-analysis seeks to contribute to the evolving landscape of neuroscience research on dissociative drugs such as ketamine by comprehensively examining the effects of acute ketamine administration on neural activation, as measured by functional magnetic resonance imaging (fMRI), in healthy participants.
Methods
We conducted a meta-analysis of existing fMRI activation studies of ketamine using multilevel kernel density analysis (MKDA). Following a comprehensive PubMed search, we quantitatively synthesized all published primary fMRI whole-brain activation studies of the effects of ketamine in healthy subjects with no overlapping samples (N=18). This approach also incorporated ensemble thresholding (α=0.05-0.0001) to minimize cluster-size detection bias and Monte Carlo simulations to correct for multiple comparisons.
Results
Our meta-analysis revealed statistically significant (p<0.05-0.0001; FWE-corrected) alterations in neural activation in multiple cortical and subcortical regions following the administration of ketamine to healthy participants (N=306).
Conclusions
These results offer valuable insights into the functional neuroanatomical effects caused by acute ketamine administration. These findings may also inform development of therapeutic applications of ketamine for various psychiatric and neurological conditions. Future studies should investigate the neural effects of ketamine administration, including both short-term and long-term effects, in clinical populations and their relation to clinical and functional improvements.
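The Monte Carlo correction for multiple comparisons used in the Methods above can be illustrated with a generic max-statistic sketch. This is a simplification for illustration only: MKDA corrects on cluster-level statistics rather than single-voxel maxima, and the voxel counts and simulation settings here are hypothetical.

```python
import random

def fwe_threshold(n_voxels, n_sims=2000, alpha=0.05, seed=0):
    """Monte Carlo family-wise-error threshold: simulate null statistic maps,
    keep the maximum statistic per map, and take the (1 - alpha) quantile of
    that max-null distribution. A real-map statistic exceeding this threshold
    anywhere is then significant at the FWE-corrected alpha level."""
    rng = random.Random(seed)
    max_null = []
    for _ in range(n_sims):
        stat_map = (rng.gauss(0.0, 1.0) for _ in range(n_voxels))
        max_null.append(max(stat_map))
    max_null.sort()
    return max_null[int((1 - alpha) * n_sims) - 1]

# The corrected threshold grows with the number of tests (voxels),
# which is how the family-wise error rate is held at alpha.
t_small = fwe_threshold(n_voxels=10)
t_large = fwe_threshold(n_voxels=1000)
```

Note that even the small map's threshold exceeds the uncorrected z of about 1.64 for alpha = 0.05, reflecting the correction.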
Bipolar I disorder (BD-I) is a chronic and recurrent mood disorder characterized by alternating episodes of depression and mania; it is also associated with substantial morbidity and mortality and with clinically significant functional impairments. While previous studies have used functional magnetic resonance imaging (fMRI) to examine neural abnormalities associated with BD-I, they have yielded mixed findings, perhaps due to differences in sampling and experimental design, including highly variable mood states at the time of scan.
Objectives
The purpose of this study is to advance our understanding of the neural basis of BD-I and mania, as measured by fMRI activation studies, and to inform the development of more effective brain-based diagnostic systems and clinical treatments.
Methods
We conducted a large-scale meta-analysis of whole-brain fMRI activation studies that compared participants with BD-I, assessed during a manic episode, to age-matched healthy controls. Following PRISMA guidelines, we conducted a comprehensive PubMed literature search using two independent coding teams to evaluate primary studies according to pre-established inclusion criteria. We then used multilevel kernel density analysis (MKDA), a well-established, voxel-wise, whole-brain, meta-analytic approach, to quantitatively synthesize all qualifying primary fMRI activation studies of mania. We used ensemble thresholding (p<0.05-0.0001) to minimize cluster size detection bias, and 10,000 Monte Carlo simulations to correct for multiple comparisons.
Results
We found that participants with BD-I (N=2,042), during an active episode of mania and relative to age-matched healthy controls (N=1,764), exhibit a pattern of significantly (p<0.05-0.0001; FWE-corrected) different activation in multiple brain regions of the cerebral cortex and basal ganglia across a variety of experimental tasks.
Conclusions
This study supports the formulation of a robust neural basis for BD-I during manic episodes and advances our understanding of the pattern of abnormal activation in this disorder. These results may inform the development of novel brain-based clinical tools for bipolar disorder such as diagnostic biomarkers, non-invasive brain stimulation, and treatment-matching protocols. Future studies should compare the neural signatures of BD-I to other related disorders to facilitate the development of protocols for differential diagnosis and improve treatment outcomes in patients with BD-I.
Attention-deficit/hyperactivity disorder (ADHD) is a highly prevalent psychiatric condition that frequently originates in early development and is associated with a variety of functional impairments. Despite a large functional neuroimaging literature on ADHD, our understanding of the neural basis of this disorder remains limited, and existing primary studies on the topic include somewhat divergent results.
Objectives
The present meta-analysis aims to advance our understanding of the neural basis of ADHD by identifying the most statistically robust patterns of abnormal neural activation throughout the whole-brain in individuals diagnosed with ADHD compared to age-matched healthy controls.
Methods
We conducted a meta-analysis of task-based functional magnetic resonance imaging (fMRI) activation studies of ADHD. Following PRISMA guidelines, this included a comprehensive PubMed search, predetermined inclusion criteria, and two independent coding teams who evaluated studies and included all task-based, whole-brain, fMRI activation studies that compared participants diagnosed with ADHD to age-matched healthy controls. We then performed multilevel kernel density analysis (MKDA), a well-established, whole-brain, voxel-wise approach that quantitatively combines existing primary fMRI studies, with ensemble thresholding (p<0.05-0.0001) and multiple comparisons correction.
Results
Participants diagnosed with ADHD (N=1,550), relative to age-matched healthy controls (N=1,340), exhibited statistically significant (p<0.05-0.0001; FWE-corrected) patterns of abnormal activation in multiple brain regions of the cerebral cortex and basal ganglia across a variety of cognitive control tasks.
Conclusions
This study advances our understanding of the neural basis of ADHD and may aid in the development of new brain-based clinical interventions as well as diagnostic tools and treatment matching protocols for patients with ADHD. Future studies should also investigate the similarities and differences in neural signatures between ADHD and other highly comorbid psychiatric disorders.
Diagnostic stewardship is increasingly recognized as a powerful tool to improve patient safety. Given the close relationship between diagnostic testing and antimicrobial misuse, antimicrobial stewardship (AMS) pharmacists should be key members of the diagnostic team. Pharmacists practicing in AMS already frequently engage with clinicians to improve the diagnostic process and have many skills needed for the implementation of diagnostic stewardship initiatives. As diagnostic stewardship becomes more broadly used, all infectious disease clinicians, including pharmacists, must collaborate to optimize patient care.
High-quality evidence is lacking for the impact on healthcare utilisation of short-stay alternatives to psychiatric inpatient services for people experiencing acute and/or complex mental health crises (known in England as psychiatric decision units [PDUs]). We assessed the extent to which changes in psychiatric hospital and emergency department (ED) activity were explained by implementation of PDUs in England using a quasi-experimental approach.
Methods
We conducted an interrupted time series (ITS) analysis of weekly aggregated data pre- and post-PDU implementation in one rural and two urban sites using segmented regression, adjusting for temporal and seasonal trends. Primary outcomes were changes in the number of voluntary inpatient admissions to (acute) adult psychiatric wards and number of ED adult mental health-related attendances in the 24 months post-PDU implementation compared to that in the 24 months pre-PDU implementation.
Results
The two PDUs (one urban and one rural) with longer (average) stays and high staff-to-patient ratios observed post-PDU decreases in the pattern of weekly voluntary psychiatric admissions relative to pre-PDU trend (Rural: −0.45%/week, 95% confidence interval [CI] = −0.78%, −0.12%; Urban: −0.49%/week, 95% CI = −0.73%, −0.25%); PDU implementation in each was associated with an estimated 35–38% reduction in total voluntary admissions in the post-PDU period. The (urban) PDU with the highest throughput, lowest staff-to-patient ratio and shortest average stay observed a 20% level reduction (−20.4%, 95% CI = −29.7%, −10.0%) in mental health-related ED attendances post-PDU, although there was little impact on long-term trend. Pooled analyses across sites indicated a significant reduction in the number of voluntary admissions following PDU implementation (−16.6%, 95% CI = −23.9%, −8.5%) but no significant (long-term) trend change (−0.20%/week, 95% CI = −0.74%, 0.34%) and no short- (−2.8%, 95% CI = −19.3%, 17.0%) or long-term (0.08%/week, 95% CI = −0.13%, 0.28%) effects on mental health-related ED attendances. Findings were largely unchanged in secondary (ITS) analyses that considered the introduction of other service initiatives in the study period.
Conclusions
The introduction of PDUs was associated with an immediate reduction of voluntary psychiatric inpatient admissions. The extent to which PDUs change long-term trends of voluntary psychiatric admissions or impact on psychiatric presentations at ED may be linked to their configuration. PDUs with a large capacity, short length of stay and low staff-to-patient ratio can positively impact ED mental health presentations, while PDUs with longer length of stay and higher staff-to-patient ratios have potential to reduce voluntary psychiatric admissions over an extended period. Taken as a whole, our analyses suggest that when establishing a PDU, consideration of the primary crisis-care need that underlies the creation of the unit is key.
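The segmented regression underlying the interrupted time series analysis above can be sketched with a basic level-and-trend-change model. This is a minimal illustration on fabricated weekly counts, omitting the seasonal and temporal adjustments (and the percentage-scale effects) of the actual analysis.

```python
import numpy as np

def segmented_its(y, intervention_week):
    """Fit a basic interrupted-time-series model by least squares:
        y_t = b0 + b1*t + b2*post_t + b3*(t - T0)*post_t + e_t
    where post_t = 1 from intervention week T0 onward. b2 estimates the
    immediate (level) change and b3 the change in slope (trend) post-intervention."""
    t = np.arange(len(y), dtype=float)
    post = (t >= intervention_week).astype(float)
    X = np.column_stack([np.ones_like(t), t, post, (t - intervention_week) * post])
    beta, *_ = np.linalg.lstsq(X, np.asarray(y, dtype=float), rcond=None)
    return beta  # [intercept, pre-trend, level change, trend change]

# Hypothetical weekly admissions: flat at 20 pre-intervention, dropping to 15
# at week 50 with no further trend change
y = [20.0] * 50 + [15.0] * 50
b0, b1, b2, b3 = segmented_its(y, intervention_week=50)
```

With this noiseless series, the fit recovers an immediate level change of −5 admissions/week and no slope change, the same decomposition (level vs long-term trend effects) reported in the Results.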
State Medical Boards (SMBs) can take severe disciplinary actions (e.g., license revocation or suspension) against physicians who commit egregious wrongdoing in order to protect the public. However, there is noteworthy variability in the extent to which SMBs impose severe disciplinary action. In this manuscript, we present and synthesize a subset of 11 recommendations based on findings from our team’s larger consensus-building project that identified a list of 56 policies and legal provisions SMBs can use to better protect patients from egregious wrongdoing by physicians.
Background: To clarify the landscape of molecular diagnoses (MDs) in early-onset epilepsy individuals, we determined the prevalent MDs stratified by age at seizure onset (SO) and the time to MD in children with SO <36 months of life. Methods: A panel of up to 302 genes associated with epilepsy was utilized and ordering physicians provided the age of SO. Diagnostic yield analyses were performed for SO ages including <1 mo, 1-2 mo, 3-5 mo, 6-11 mo, 12-23 mo, and 24-35 mo. The time to MD (MD age - SO age) was determined for the top 10 genes in each SO category. Results: 15,074 individuals with SO <36 months of life were tested. Predominant MD findings are as follows: KCNQ2 in neonates with SO at <1 mo, KCNQ2 and CDKL5 for SO between 1-2 mo, PRRT2 and SCN1A for SO between 3-11 mo, and SCN1A for SO between 12-35 mo. The median time to MD varied by gene. For example, there was no delay in the median time to MD for the GLDC, KCNQ2, and SCN2A genes while the median delay for MECP2, SLC2A1, and other genes was ≥12 months. Conclusions: These data highlight the importance of comprehensive early testing in children with early-onset epilepsy.
Objective. The efficacy of individualized, community-based physical activity as an adjunctive smoking cessation treatment to enhance long-term smoking cessation rates was evaluated for the Lifestyle Enhancement Program (LEAP). Methods. The study was a two-arm, parallel-group, randomized controlled trial. All participants (n = 392) received cessation counseling and a nicotine patch and were randomized to physical activity (n = 199; YMCA membership and personalized exercise programming from a health coach) or an equal contact frequency wellness curriculum (n = 193). Physical activity treatment was individualized and flexible (with each participant selecting types of activities and intensity levels and being encouraged to exercise at the YMCA and at home, as well as to use “lifestyle” activity). The primary outcome (biochemically verified prolonged abstinence at 7 weeks (end of treatment) and at 6 and 12 months post-cessation) and secondary outcomes (7-day point prevalent tobacco abstinence (PPA), total minutes per week of leisure time physical activity and strength training) were assessed at baseline, 7 weeks, 6 months, and 12 months. Results. Prolonged abstinence in the physical activity and wellness groups was 19.6% and 25.4%, respectively, at 7 weeks, 15.1% and 16.6% at 6 months, and 14.1% and 17.1% at 12 months (all between-group P values >0.18). Similarly, PPA rates did not differ significantly between groups at any follow-up. Change from baseline leisure-time activity plus strength training increased significantly in the physical activity group at 7 weeks (P = 0.04). Across treatment groups, an increase in the number of minutes per week in strength training from baseline to 7 weeks predicted prolonged abstinence at 12 months (P ≤ 0.001). Further analyses revealed that social support, fewer years smoked, and less temptation to smoke were associated with prolonged abstinence over 12 months in both groups. Conclusions.
Community-based physical activity programming, delivered as adjunctive treatment with behavioral/pharmacological cessation treatment, did not improve long-term quit rates compared to adjunctive wellness counseling plus behavioral/pharmacological cessation treatment. This trial is registered with https://beta.clinicaltrials.gov/study/NCT00403312, registration no. NCT00403312.
Many clinical trials leverage real-world data. Typically, these data are manually abstracted from electronic health records (EHRs) and entered into electronic case report forms (CRFs), a time- and labor-intensive process that is also error-prone and may miss information. Automated transfer of data from EHRs to CRFs has the potential to reduce data abstraction and entry burden as well as improve data quality and safety.
Methods:
We conducted a test of automated EHR-to-CRF data transfer for 40 participants in a clinical trial of hospitalized COVID-19 patients. We determined which coordinator-entered data could be automated from the EHR (coverage), and the frequency with which the values from the automated EHR feed and values entered by study personnel for the actual study matched exactly (concordance).
Results:
The automated EHR feed populated 10,081/11,952 (84%) coordinator-completed values. For fields where both the automation and study personnel provided data, the values matched exactly 89% of the time. Highest concordance was for daily lab results (94%), which also required the most personnel resources (30 minutes per participant). In a detailed analysis of 196 instances where the personnel-entered and automated values differed, both a study coordinator and a data analyst agreed that 152 (78%) instances were the result of data entry error.
Conclusions:
An automated EHR feed has the potential to significantly decrease study personnel effort while improving the accuracy of CRF data.
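The coverage and concordance metrics defined in the Methods above amount to two simple ratios per participant. A minimal sketch, using entirely hypothetical CRF field names and values rather than the study's actual data dictionary:

```python
def coverage_and_concordance(coordinator, automated):
    """coordinator/automated: dicts mapping CRF field -> value (None/absent if not filled).
    Coverage: share of coordinator-entered fields the automated feed also populated.
    Concordance: among fields both sources populated, share matching exactly."""
    entered = [f for f, v in coordinator.items() if v is not None]
    filled = [f for f in entered if automated.get(f) is not None]
    matches = sum(1 for f in filled if automated[f] == coordinator[f])
    coverage = len(filled) / len(entered)
    concordance = matches / len(filled) if filled else float("nan")
    return coverage, concordance

# Hypothetical CRF fields for one participant
coord = {"wbc": "9.1", "temp": "38.2", "o2_device": "nasal cannula", "notes": "stable"}
auto = {"wbc": "9.1", "temp": "38.0", "o2_device": "nasal cannula"}

cov, con = coverage_and_concordance(coord, auto)
```

Here the feed covers three of four coordinator-entered fields (coverage 0.75) and matches on two of those three (concordance 2/3), mirroring the 84% and 89% figures reported above at the study scale.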
People with lived experience of incarceration have higher rates of morbidity and mortality compared to people without a history of incarceration. Research conducted unethically in prisons and jails led to increased scrutiny of research to ensure the needs of those studied are protected. One consequence of increased restrictions on research with populations involved in the criminal-legal system is a reluctance to engage in research evaluations of healthcare for people who are incarcerated and people who have lived experience of incarceration. Ethical research can be done in partnership with people with lived experience of incarceration and other key stakeholders and should be encouraged. In this article, we describe how stakeholder engagement can be accomplished in this setting, and further, how such engagement leads to impactful research that can be disseminated and implemented across disciplines and communities. The goal is to build trust across the spectrum of people who work in, live in, or are impacted by the criminal-legal system, with the purpose of moving toward health equity.
Humpback whales (Megaptera novaeangliae) exhibit maternally driven fidelity to feeding grounds, and yet occasionally occupy new areas. Humpback whale sightings and mortalities in the New York Bight apex (NYBA) have been increasing over the last decade, providing an opportunity to study this phenomenon in an urban habitat. Whales in this area overlap with human activities, including busy shipping traffic leading into the Port of New York and New Jersey. The site fidelity, population composition and demographics of individual whales were analysed to better inform management in this high-risk area. Whale watching and other opportunistic data collections were used to identify 101 individual humpback whales in the NYBA from spring through autumn, 2012–2018. Although mean occurrence was low (2.5 days), mean occupancy was 37.6 days, and 31.3% of whales returned from one year to the next. Individuals compared with other regional and ocean-basin-wide photo-identification catalogues (N = 52) were primarily resighted at other sites along the US East Coast, including the Gulf of Maine feeding ground. Sightings of mother-calf pairs were rare in the NYBA, suggesting that maternally directed fidelity may not be responsible for the presence of young whales in this area. Other factors including shifts in prey species distribution or changes in population structure more broadly should be investigated.
Background: Susac Syndrome (SuS) is a rare autoimmune disorder of the cerebral, retinal, and inner ear microvasculature. One of the cardinal manifestations of central nervous system (CNS) involvement is encephalopathy; however, the cognitive profile of SuS is poorly characterized in the literature. Methods: In this cross-sectional case series of seven participants diagnosed with Susac Syndrome in remission in British Columbia, we use a battery of neuropsychological testing, subjective disease scores, and objective markers of disease severity to characterize the affected cognitive domains and determine whether any disease characteristics predict neuropsychological performance. We also compare this battery of tests to neuroimaging markers to determine whether radiographic markers of CNS disease correlate with clinical evaluations of disease severity. Results: Participants showed a variety of cognitive deficits, with memory and language dysfunction being the most common. Despite the variability, performance on some neuropsychological tests (MoCA) correlated with markers of functional disability (EDSS). Additionally, MoCA and EDSS scores correlated with neuroimaging findings of both corpus callosum and white matter changes. Finally, psychiatric scores correlated with participant-reported scores of disease severity. Conclusions: There is a relationship between cognitive deficits, subjective and objective disease disability, and neuroimaging findings in Susac Syndrome.