Neuropsychiatry training in the UK currently lacks a formal scheme or qualification, and its demand and availability have not been systematically explored. We conducted the largest UK-wide survey of psychiatry trainees to examine their experiences in neuropsychiatry training.
Results
In total, 185 trainees from all UK training regions completed the survey. Although 43.6% expressed interest in a neuropsychiatry career, only 10% felt they would gain sufficient experience by the end of training. Insufficient access to clinical rotations was the most common barrier, with significantly better access in London compared with other regions. Most respondents were in favour of additional neurology training (83%) and a formal accreditation in neuropsychiatry (90%).
Clinical implications
Strong trainee interest in neuropsychiatry contrasts with the limited training opportunities currently available nationally. Our survey highlights the need for increased neuropsychiatry training opportunities, development of a formalised training programme and a clinical accreditation pathway for neuropsychiatry in the UK.
Successfully educating urgent care patients on appropriate use and risks of antibiotics can be challenging. We assessed the conscious and subconscious impact various educational materials (informational handout, priming poster, and commitment poster) had on urgent care patients’ knowledge and expectations regarding antibiotics.
Design:
Stratified block randomized controlled trial.
Setting:
Urgent care centers (UCCs) in Colorado, Florida, Georgia, and New Jersey.
Participants:
Urgent care patients.
Methods:
We randomized 29 UCCs across six study arms to display specific educational materials (informational handout, priming poster, and commitment poster). The primary intention-to-treat (ITT) analysis evaluated whether the materials impacted patient knowledge or expectations of antibiotic prescribing by assigned study arm. The secondary as-treated analysis evaluated the same outcome comparing patients who recalled seeing the assigned educational material and patients who either did not recall seeing an assigned material or were in the control arm.
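The distinction between the primary intention-to-treat grouping and the secondary as-treated grouping can be sketched in a few lines. This is an illustrative reconstruction only; the field names and example records are hypothetical, not the study's actual data or analysis code.

```python
# Sketch of the ITT vs. as-treated grouping described above.
# "assigned_arm" and "recalled_material" are hypothetical field names.

def itt_group(patient):
    """Intention-to-treat: group purely by assigned study arm."""
    return patient["assigned_arm"]

def as_treated_group(patient):
    """As-treated: 'exposed' only if the patient recalled seeing the
    assigned material; control-arm patients and non-recallers are pooled."""
    if patient["assigned_arm"] != "control" and patient["recalled_material"]:
        return "exposed"
    return "unexposed"

patients = [
    {"assigned_arm": "handout", "recalled_material": True},
    {"assigned_arm": "handout", "recalled_material": False},
    {"assigned_arm": "control", "recalled_material": False},
]

print([itt_group(p) for p in patients])        # ['handout', 'handout', 'control']
print([as_treated_group(p) for p in patients]) # ['exposed', 'unexposed', 'unexposed']
```

Because only 27.2% of intervention-arm participants recalled the materials, the two groupings diverge sharply, which is why the ITT and as-treated analyses can reach different conclusions.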
Results:
Twenty-seven centers returned 2,919 questionnaires across six study arms. Only 27.2% of participants in the intervention arms recalled seeing any educational materials. In our primary ITT analysis, no difference in knowledge or expectations of antibiotic prescribing was noted between groups. However, in the as-treated analysis, the handout and commitment poster were associated with higher antibiotic knowledge scores.
Conclusions:
Educational materials in UCCs are associated with increased antibiotic-related knowledge among patients when they are seen and recalled; however, most patients do not recall passively displayed materials. More emphasis should be placed on creating and drawing attention to memorable patient educational materials.
Accelerating COVID-19 Treatment Interventions and Vaccines (ACTIV) was initiated by the US government to rapidly develop and test vaccines and therapeutics against COVID-19 in 2020. The ACTIV Therapeutics-Clinical Working Group selected ACTIV trial teams and clinical networks to expeditiously develop and launch master protocols based on therapeutic targets and patient populations. The suite of clinical trials was designed to collectively inform therapeutic care for COVID-19 outpatient, inpatient, and intensive care populations globally. In this report, we highlight challenges, strategies, and solutions around clinical protocol development and regulatory approval to document our experience and propose plans for future similar healthcare emergencies.
The U.S. Department of Agriculture–Agricultural Research Service (USDA-ARS) has been a leader in weed science research covering topics ranging from the development and use of integrated weed management (IWM) tactics to basic mechanistic studies, including biotic resistance of desirable plant communities and herbicide resistance. ARS weed scientists have worked in agricultural and natural ecosystems, including agronomic and horticultural crops, pastures, forests, wild lands, aquatic habitats, wetlands, and riparian areas. Through strong partnerships with academia, state agencies, private industry, and numerous federal programs, ARS weed scientists have made contributions to discoveries in the newest fields of robotics and genetics, as well as the traditional and fundamental subjects of weed–crop competition and physiology and integration of weed control tactics and practices. Weed science at ARS is often overshadowed by other research topics; thus, few are aware of the long history of ARS weed science and its important contributions. This review is the result of a symposium held at the Weed Science Society of America’s 62nd Annual Meeting in 2022 that included 10 separate presentations in a virtual Weed Science Webinar Series. The overarching themes of management tactics (IWM, biological control, and automation), basic mechanisms (competition, invasive plant genetics, and herbicide resistance), and ecosystem impacts (invasive plant spread, climate change, conservation, and restoration) represent core ARS weed science research that is dynamic and efficacious and has been a significant component of the agency’s national and international efforts. This review highlights current studies and future directions that exemplify the science and collaborative relationships both within and outside ARS. 
Given the constraints of weeds and invasive plants on all aspects of food, feed, and fiber systems, there is an acknowledged need to face new challenges, including agriculture and natural resources sustainability, economic resilience and reliability, and societal health and well-being.
Homeless shelter residents and staff may be at higher risk of SARS-CoV-2 infection. However, SARS-CoV-2 infection estimates in this population have been reliant on cross-sectional or outbreak investigation data. We conducted routine surveillance and outbreak testing in 23 homeless shelters in King County, Washington, to estimate the occurrence of laboratory-confirmed SARS-CoV-2 infection and risk factors during 1 January 2020–31 May 2021. Symptom surveys and nasal swabs were collected for SARS-CoV-2 testing by RT-PCR for residents aged ≥3 months and staff. We collected 12,915 specimens from 2,930 unique participants. We identified 4.74 (95% CI 4.00–5.58) SARS-CoV-2 infections per 100 individuals (residents: 4.96, 95% CI 4.12–5.91; staff: 3.86, 95% CI 2.43–5.79). Most infections were asymptomatic at the time of detection (74%) and detected during routine surveillance (73%). Outbreak testing yielded higher test positivity than routine surveillance (2.7% versus 0.9%). Among those infected, residents were less likely to report symptoms than staff. Participants who were vaccinated against seasonal influenza and were current smokers had lower odds of having an infection detected. Active surveillance that includes SARS-CoV-2 testing of all persons is essential in ascertaining the true burden of SARS-CoV-2 infections among residents and staff of congregate settings.
We studied how patient beliefs regarding the need for antibiotics, as measured by expectation scores, and antibiotic prescribing outcome affect patient satisfaction using data from 2,710 urgent-care visits. Satisfaction was affected by antibiotic prescribing among patients with medium–high expectation scores but not among patients with low expectation scores.
Background: Taking antibiotics outside the guidance of a clinician (nonprescription use) is a potential safety issue and runs counter to antibiotic stewardship efforts. We identified the symptoms and illnesses, as well as the situations, that may predispose patients to take antibiotics, and we compared these findings between patients attending public primary care clinics and private emergency departments. Methods: A cross-sectional survey was conducted between January 2020 and March 2021 in 6 primary care clinics and 2 emergency departments in the United States. We queried patients about 5 symptoms and illnesses (Fig. 1) and 14 situations (Fig. 2) to investigate whether these would lead the patients to take antibiotics without a prescription. We used the χ2 test to compare responses for the symptoms and illnesses and for the situations between respondents from public and private healthcare systems. We set the P value for significance at <.025. Results: In total, the survey had 564 respondents (median age, 49.7 years; range, 19–92), and 72% were female. Most respondents identified as either Hispanic or Latina/Latino (46.6%) or African American or Black (33%), followed by White (15.8%) and other (4.6%). Most respondents had visited public clinics (72%). The most common insurance status was Medicaid or a county financial assistance program (56.6%), followed by private insurance or Medicare (36.7%) and self-pay (6.7%). In public primary care clinics, only 23% had private insurance or Medicare compared to 72.9% in private emergency departments. Of those surveyed, 69% agreed that antibiotics would improve the recovery from sinus infections, followed by bronchitis (64%), sore throat (64%), cold/flu (61.4%), and diarrhea (31.5%). The proportions of respondents who believed that antibiotics would improve the recovery from diarrhea (36.2% vs 19.4%; P = .004) and sore throat (59.9% vs 48.4%; P < .001) were significantly higher among public versus private outpatient respondents.
We did not find significant differences for cold/flu, sinus infection, or bronchitis between these 2 healthcare systems (Fig. 1). In 11 of the 14 situations, patients in public clinics were more likely to report a likelihood of using nonprescription antibiotics than patients visiting the private emergency rooms (Fig. 2). Conclusions: Future stewardship interventions should account for the symptoms, illnesses, and situations that may influence outpatients to take nonprescription antibiotics. Addressing modifiable factors (eg, leftover antibiotics, antibiotics given by friends or family, and antibiotics available without a prescription in stores or markets) may also curtail these unsafe practices and reduce antibiotic resistance.
In contrast to the well-described effects of early intervention (EI) services for youth-onset psychosis, the potential benefits of EI for adult-onset psychosis are uncertain. This paper aims to examine the effectiveness of EI on functioning and symptomatic improvement in adult-onset psychosis, and the optimal duration of the intervention.
Methods
In a 4-year rater-masked, parallel-group, superiority randomized controlled trial of treatment effectiveness (ClinicalTrials.gov: NCT00919620), 360 patients with psychosis aged 26–55 years were randomized to receive either standard care (SC, n = 120) or case management for 2 years (2-year EI, n = 120) or 4 years (4-year EI, n = 120). Primary (i.e. social and occupational functioning) and secondary outcomes (i.e. positive and negative symptoms, and quality of life) were assessed at baseline, at 6 months, and then yearly for 4 years.
Results
Compared with SC, patients with 4-year EI had better Role Functioning Scale (RFS) immediate [interaction estimate = 0.008, 95% confidence interval (CI) = 0.001–0.014, p = 0.02] and extended social network (interaction estimate = 0.011, 95% CI = 0.004–0.018, p = 0.003) scores. Specifically, these improvements were observed in the first 2 years. Compared with the 2-year EI group, the 4-year EI group had better RFS total (p = 0.01), immediate (p = 0.01), and extended social network (p = 0.05) scores at the fourth year. Meanwhile, the 4-year (p = 0.02) and 2-year EI (p = 0.004) group had less severe symptoms than the SC group at the first year.
Conclusions
Specialized EI treatment for psychosis patients aged 26–55 should be provided for at least the initial 2 years of illness. Further treatment up to 4 years conferred little additional benefit in this age range over the course of the study.
Although mania is the hallmark symptom of bipolar I disorder (BD-I), most patients initially present for treatment with depressive symptoms. Misdiagnosis of BD-I as major depressive disorder (MDD) is common, potentially resulting in poor outcomes and inappropriate antidepressant monotherapy. Screening patients with depressive symptoms is a practical strategy to help healthcare providers (HCPs) identify when additional assessment for BD-I is warranted. The new 6-item Rapid Mood Screener (RMS) is a pragmatic patient-reported BD-I screening tool that relies on easily understood terminology to screen for manic symptoms and other BD-I features in <2 minutes. The RMS was validated in an observational study in patients with clinically confirmed BD-I (n=67) or MDD (n=72). When 4 or more items were endorsed (“yes”), the sensitivity of the RMS for identifying patients with BD-I was 0.88 and specificity was 0.80; positive and negative predictive values were 0.80 and 0.88, respectively. To more thoroughly understand screening tool use among HCPs, a 10-minute survey was conducted.
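The reported predictive values follow from the sensitivity, the specificity, and the case mix of the validation sample (67 BD-I of 139 participants). The back-of-the-envelope check below uses the standard Bayes-rule formulas; it is an illustrative recomputation, not the study's actual analysis.

```python
# Recover PPV and NPV from sensitivity, specificity, and prevalence.
# prevalence here is the BD-I fraction of the validation sample (67/139);
# this is a sanity check of the published figures, not the study's code.

def predictive_values(sens, spec, prevalence):
    ppv = sens * prevalence / (sens * prevalence + (1 - spec) * (1 - prevalence))
    npv = spec * (1 - prevalence) / (spec * (1 - prevalence) + (1 - sens) * prevalence)
    return ppv, npv

prev = 67 / (67 + 72)  # BD-I prevalence in the validation sample
ppv, npv = predictive_values(sens=0.88, spec=0.80, prevalence=prev)
print(round(ppv, 2), round(npv, 2))  # 0.8 0.88
```

Note that PPV and NPV are prevalence-dependent: in a primary care population where BD-I is much rarer than 48%, the same sensitivity and specificity would yield a lower PPV.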
Methods
A nationwide sample of HCPs (N=200) was selected using multiple HCP panels; HCPs were asked to describe their opinions/current use of screening tools, assess the RMS, and evaluate the RMS versus the widely recognized Mood Disorder Questionnaire (MDQ). Results were reported by grouped specialties (primary care physicians, general nurse practitioners [NPs]/physician assistants [PAs], psychiatrists, and psychiatric NPs/PAs). Included HCPs were in practice <30 years, spent at least 75% of their time in clinical practice, saw at least 10 patients with depression per month, and diagnosed MDD or BD in at least 1 patient per month. Findings were reported using descriptive statistics; statistical significance was assessed at the 95% confidence level.
Results
Among HCPs, 82% used a tool to screen for MDD, while 32% used a tool for BD. Screening tool attributes considered to be of the greatest value included sensitivity (68%), easy to answer questions (66%), specificity (65%), confidence in results (64%), and practicality (62%). Of HCPs familiar with screening tools, 70% thought the RMS was at least somewhat better than other screening tools. Most HCPs were aware of the MDQ (85%), but only 29% reported current use. Most HCPs (81%) preferred the RMS to the MDQ, and the RMS significantly outperformed the MDQ across valued attributes; 76% reported that they were likely to use the RMS to screen new patients with depressive symptoms. A total of 84% said the RMS would have a positive impact on their practice, with 46% saying they would screen more patients for bipolar disorder.
Discussion
The RMS was viewed positively by HCPs who participated in a brief survey. A large percentage of respondents preferred the RMS over the MDQ and indicated that they would use it in their practice. Collectively, responses indicated that the RMS is likely to have a positive impact on screening behavior.
Acute ischemic stroke may affect women and men differently. We aimed to evaluate sex differences in outcomes of endovascular treatment (EVT) for ischemic stroke due to large vessel occlusion in a population-based study in Alberta, Canada.
Methods and Results:
Over a 3-year period (April 2015–March 2018), 576 patients fit the inclusion criteria of our study and constituted the EVT group of our analysis. The medical treatment group of the ESCAPE trial had 150 patients. Thus, our total sample size was 726. We captured outcomes in clinical routine using administrative data and a linked database methodology. The primary outcome of our study was home-time. Home-time refers to the number of days that the patient was back at their premorbid living situation without an increase in the level of care within 90 days of the index stroke event. In adjusted analysis, EVT was associated with an increase of 90-day home-time by an average of 6.08 (95% CI −2.74–14.89, p-value 0.177) days in women compared to an average of 11.20 (95% CI 1.94–20.46, p-value 0.018) days in men. Further analysis revealed that the association between EVT and 90-day home-time in women was confounded by age and onset-to-treatment time.
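The home-time outcome defined above can be made concrete with a short sketch: count the days within the 90-day window that the patient was not in hospital or an increased level of care. The interval representation below is an assumption for illustration, not the linked-database derivation the study used.

```python
# Minimal sketch of 90-day home-time: days within 90 days of the index
# stroke spent back at the premorbid living situation. The half-open
# (start_day, end_day) interval format is a hypothetical representation.

def home_time_90(care_intervals):
    """care_intervals: list of (start_day, end_day) half-open intervals,
    in days since the index stroke, during which the patient was in
    hospital or an increased level of care. Returns days at home."""
    in_care = set()
    for start, end in care_intervals:
        in_care.update(range(max(start, 0), min(end, 90)))
    return 90 - len(in_care)

# E.g. a 10-day acute stay followed immediately by 14 days of rehabilitation:
print(home_time_90([(0, 10), (10, 24)]))  # 66
```

Using a set of in-care days means overlapping intervals (e.g. a readmission recorded twice) are not double-counted.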
Conclusions:
We found a nonsignificant nominal reduction of 90-day home-time gain for women compared to men in this province-wide population-based study of EVT for large vessel occlusion, which was only partially explained by confounding.
We document a positive and strong correlation between speciation and extinction rates in the Paleozoic zooplankton graptoloid clade, between 481 and 419 Ma. This correlation has a magnitude of ~0.35–0.45 and manifests at a temporal resolution of <50 kyr and, for part of our data set, <25 kyr. It cannot be explained as an artifact of the method used to measure rates, sampling bias, bias resulting from construction of the time series, autocorrelation, underestimation of species durations, or undetected phyletic evolution. Correlations are approximately equal during the Ordovician and Silurian, despite the very different speciation and extinction regimes prevailing during these two periods, and correlation is strongest in the shortest-lived cohorts of species.
We infer that this correlation reflects approximately synchronous coupling of speciation and extinction in the graptoloids on timescales of a few tens of thousands of years. Almost half of graptoloid species in our data set have durations of <0.5 Myr, and previous studies have demonstrated that, during times of background extinction, short-lived species were selectively targeted by extinction. These observations may be consistent with the model of ephemeral speciation, whereby new species are inferred to form constantly and at high rate, but most of them disappear rapidly through extinction or reabsorption into the ancestral lineage. Diversity dependence with a lag of ~1 Myr, also documented elsewhere, may reflect a subsequent and relatively slow, competitive dynamic that governed those species that dispersed beyond their originating water mass and escaped the ephemeral species filter.
The rocky shores of the north-east Atlantic have long been studied. Our focus is from Gibraltar to Norway plus the Azores and Iceland. Phylogeographic processes shape biogeographic patterns of biodiversity. Long-term and broadscale studies have shown the responses of biota to past climate fluctuations and more recent anthropogenic climate change. Inter- and intra-specific species interactions along sharp local environmental gradients shape distributions and community structure and hence ecosystem functioning. Shifts in domination by fucoids in shelter to barnacles/mussels in exposure are mediated by grazing by patellid limpets. Further south fucoids become increasingly rare, with species disappearing or restricted to estuarine refuges, caused by greater desiccation and grazing pressure. Mesoscale processes influence bottom-up nutrient forcing and larval supply, hence affecting species abundance and distribution, and can be proximate factors setting range edges (e.g., the English Channel, the Iberian Peninsula). Impacts of invasive non-native species are reviewed. Knowledge gaps, such as work on rockpools and host–parasite dynamics, are also outlined.
A significant minority of people presenting with a major depressive episode (MDE) experience co-occurring subsyndromal hypo/manic symptoms. As this presentation may have important prognostic and treatment implications, the DSM–5 codified a new nosological entity, the “mixed features specifier,” referring to individuals meeting threshold criteria for an MDE and subthreshold symptoms of (hypo)mania or to individuals with syndromal mania and subthreshold depressive symptoms. The mixed features specifier adds to a growing list of monikers that have been put forward to describe phenotypes characterized by the admixture of depressive and hypomanic symptoms (e.g., mixed depression, depression with mixed features, or depressive mixed states [DMX]). Current treatment guidelines, regulatory approvals, as well as the current evidentiary base provide insufficient decision support to practitioners who provide care to individuals presenting with an MDE with mixed features. In addition, all existing psychotropic agents evaluated in mixed patients have largely been confined to patient populations meeting the DSM–IV definition of “mixed states,” wherein the co-occurrence of threshold-level mania and threshold-level MDE was required. Toward the aim of assisting clinicians providing care to adults with MDE and mixed features, we have assembled a panel of experts on mood disorders to develop these guidelines on the recognition and treatment of mixed depression, based on the few studies that have focused specifically on DMX as well as decades of cumulated clinical experience.
Although extinction risk has been found to have a consistent negative relationship with geographic range across wide temporal and taxonomic scales, the effect has been difficult to disentangle from factors such as sampling, ecological niche, or clade. In addition, studies of extinction risk have focused on benthic invertebrates with less work on planktic taxa. We employed a global set of 1114 planktic graptolite species from the Ordovician to lower Devonian to analyze the predictive power of species’ traits and abiotic factors on extinction risk, combining general linear models (GLMs), partial least-squares regression (PLSR), and permutation tests. Factors included measures of geographic range, sampling, and graptolite-specific factors such as clade, biofacies affiliation, shallow water tolerance, and age cohorts split at the base of the Katian and Rhuddanian stages.
The percentage of variance in durations explained varied substantially between taxon subsets, from 12% to 45%. Overall commonness (the correlated effects of geographic range and sampling) was the strongest, most consistent factor (12–30% variance explained), with clade and age cohort adding up to 18% and other factors <10%. Surprisingly, geographic range alone contributed little explanatory power (<5%). It is likely that this is a consequence of a nonlinear relationship between geographic range and extinction risk, wherein the largest reductions in extinction risk are gained from moderate expansion of small geographic ranges. Thus, even large differences in range size between graptolite species did not lead to a proportionate difference in extinction risk because of the large average ranges of these species. Finally, we emphasize that the common practice of determining the geographic range of taxa from the union of all occurrences over their duration poses a substantial risk of overestimating the geographic scope of the realized ecological niche and, thus, of further conflating sampling effects on observed duration with the biological effects of range size on extinction risk.
The imposition of male sexual characteristics onto the female (imposex) is present in wild populations of the non-native veined rapa whelk (Rapana venosa) in Chesapeake Bay, USA but does not appear to compromise reproductive function. Cultured whelks were used to test two hypotheses: (1) observed imposex metrics will track tributyltin (TBT) water concentrations at each of three sites; (2) male and imposex/female whelks from the same site will have similar TBT body burdens. Cultured 2-year-old whelks were transplanted to three field sites in the York River, USA at the onset of their second reproductive season. Transplant site mean TBT water concentrations ranged from 1.4 ± 0.77 to 64.2 ± 57.8 ng l−1. Imposex incidence was 100% after 28 weeks with an observed M:F:IF ratio of 81:0:92 across all sites. Imposex stages (median vas deferens scale index = 4) and reproductive output were similar across sites. The imposex severity (IS = penis length/shell length) increased with increasing TBT concentrations. The relative penis length (RPLI) and relative penis size (RPSI) indices were positively related to site-specific TBT levels. Male whelks accumulated significantly higher TBT concentrations than female whelks at the site with the highest TBT concentration. Mean TBT concentrations in whelk egg capsules were significantly higher than concentrations in male or female whelk tissue. Egg capsule deposition provides a depuration mechanism for female whelks to reduce body burden of lipophilic TBT. Sex, season and reproductive status should be considered when using gastropod bioaccumulation to monitor TBT effects.
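The penis indices named above have conventional definitions in the imposex literature: RPLI as the ratio of mean female to mean male penis length (×100), and RPSI as the corresponding ratio of mean cubed lengths (×100). The abstract does not give formulas, so the sketch below assumes those standard definitions, and the measurements are made-up illustrations.

```python
# RPLI and RPSI under their commonly used definitions (assumed here,
# as the abstract does not state the formulas). Lengths are hypothetical.

def _mean(xs):
    return sum(xs) / len(xs)

def rpli(female_penis_lengths, male_penis_lengths):
    """Relative penis length index: 100 * mean female / mean male length."""
    return 100 * _mean(female_penis_lengths) / _mean(male_penis_lengths)

def rpsi(female_penis_lengths, male_penis_lengths):
    """Relative penis size index: 100 * ratio of mean cubed lengths."""
    return 100 * _mean([x ** 3 for x in female_penis_lengths]) / \
           _mean([x ** 3 for x in male_penis_lengths])

females = [2.0, 2.5, 3.0]   # mm, hypothetical imposex females
males = [10.0, 12.0, 11.0]  # mm, hypothetical males

print(round(rpli(females, males), 1))  # 22.7
```

Because RPSI cubes the lengths, it weights severe imposex much more heavily than RPLI, which is why the two indices can diverge at the same site.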
In recent years several authors have questioned the reality of a widely accepted and apparently large increase in marine biodiversity through the Cenozoic. Here we use collection-level occurrence data from the rich and uniquely well documented New Zealand (NZ) shelfal marine mollusc fauna to test this question at a regional scale. Because the NZ data were generated by a small number of workers and have been databased over many decades, we have been able to either avoid or quantify many of the biases inherent in analyses of past biodiversity. In particular, our major conclusions are robust to several potential taphonomic and systematic biases and methodological uncertainties, namely non-uniform loss of aragonitic faunas, biostratigraphic range errors, taxonomic errors, choice of time bins, choice of analytical protocols, and taxonomic rank of analysis.
The number of taxa sampled increases through the Cenozoic. Once diversity estimates are standardized for sampling biases, however, we see no evidence for an increase in marine mollusc diversity in the NZ region through the middle and late Cenozoic. Instead, diversity has been approximately constant for much of the past 40 Myr and, at the species and genus levels, has declined over the past ~5 Myr. Assuming that the result for NZ shelfal molluscs is representative of other taxonomic groups and other temperate faunal provinces, then this suggests that the postulated global increase in diversity is either an artifact of sampling bias or analytical methods, resulted from increasing provinciality, or was driven by large increases in diversity in tropical regions. We see no evidence for a species-area effect on diversity. Likewise, we are unable to demonstrate a relationship between marine temperature and diversity, although this question should be re-examined once refined shallow marine temperature estimates become available.
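Standardizing diversity estimates for sampling, as described above, is commonly done by rarefaction: each time bin's occurrences are repeatedly subsampled to a fixed quota and the species drawn are counted. The following is a generic sketch of that idea, not the authors' exact standardization protocol.

```python
import random

# Classical rarefaction: subsample each bin's occurrence list to a fixed
# quota and average the species count across trials. A generic sketch,
# not the specific protocol used in the New Zealand mollusc study.

def rarefied_richness(occurrences, quota, trials=1000, seed=0):
    """occurrences: list of species IDs, one entry per sampled occurrence.
    Returns mean species richness across subsamples of size `quota`."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        subsample = rng.sample(occurrences, quota)
        total += len(set(subsample))
    return total / trials

# A heavily sampled bin and a lightly sampled bin, both hypothetical:
bin_a = ["sp1"] * 50 + ["sp2"] * 30 + ["sp3"] * 20
bin_b = ["sp1"] * 10 + ["sp2"] * 5 + ["sp3"] * 3 + ["sp4"] * 2
print(rarefied_richness(bin_a, quota=15) <= 3)  # True: only 3 species exist
```

Comparing bins at a common quota removes the tendency of better-sampled intervals to appear more diverse simply because more specimens were collected.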
Recent studies suggest that sand can serve as a vehicle for exposure of humans to pathogens at beach sites, resulting in increased health risks. Sampling for microorganisms in sand should therefore be considered for inclusion in regulatory programmes aimed at protecting recreational beach users from infectious disease. Here, we review the literature on pathogen levels in beach sand, and their potential for affecting human health. In an effort to provide specific recommendations for sand sampling programmes, we outline published guidelines for beach monitoring programmes, which are currently focused exclusively on measuring microbial levels in water. We also provide background on spatial distribution and temporal characteristics of microbes in sand, as these factors influence sampling programmes. First steps toward establishing a sand sampling programme include identifying appropriate beach sites and use of initial sanitary assessments to refine site selection. A tiered approach is recommended for monitoring. This approach would include the analysis of samples from many sites for faecal indicator organisms and other conventional analytes, while testing for specific pathogens and unconventional indicators is reserved for high-risk sites. Given the diversity of microbes found in sand, studies are urgently needed to identify the most significant aetiological agent of disease and to relate microbial measurements in sand to human health risk.