Respiratory virus testing is routinely performed, and ways to obtain specimens other than a nasopharyngeal swab are needed for pandemic preparedness. The main objective was to validate a self-collected oral-nasal swab for the detection of Influenza and respiratory syncytial virus (RSV).
Design:
Diagnostic test validation of a self-collected oral-nasal swab compared with a provider-collected nasopharyngeal swab.
Setting:
Emergency Department at Michael Garron Hospital.
Participants:
Consecutive individuals who presented to the Emergency Department with a suspected viral upper respiratory tract infection were included if they self-collected an oral-nasal swab. Individuals testing positive for Influenza or RSV along with randomly selected participants who tested negative were eligible for inclusion.
Interventions:
For all participants, the paired oral-nasal swab was tested using a multiplex respiratory virus polymerase chain reaction for the three respiratory pathogens and compared with the nasopharyngeal swab.
Results:
In total, 48 individuals tested positive for Influenza, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), or RSV, along with 80 who tested negative. Of these, 110 were symptomatic, with a median time from symptom onset to testing of 1 day (interquartile range 2–5 days). Using the clinical nasopharyngeal swab as the reference standard, sensitivity was 0.75 (95% CI, 0.43–0.95) and specificity was 0.99 (95% CI, 0.93–1.00) for RSV, and sensitivity was 0.67 (95% CI, 0.49–0.81) and specificity was 0.96 (95% CI, 0.89–0.99) for Influenza.
Conclusions:
Multiplex testing with a self-collected oral-nasal swab for Influenza and RSV is not an acceptable substitute for a healthcare provider-collected nasopharyngeal swab, primarily due to suboptimal Influenza test characteristics.
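The sensitivity and specificity estimates reported above come from a standard 2×2 comparison against the reference-standard swab. As an illustration only, the sketch below computes both measures with Wilson score confidence intervals, a common choice for binomial proportions; the cell counts are hypothetical (the abstract does not report them), and the study's actual CI method is not stated.

```python
import math

def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity and specificity from a 2x2 table, each with a 95% Wilson CI."""
    def wilson(successes, n, z=1.96):
        # Wilson score interval for a binomial proportion
        p = successes / n
        denom = 1 + z**2 / n
        centre = (p + z**2 / (2 * n)) / denom
        half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
        return p, max(0.0, centre - half), min(1.0, centre + half)

    return {
        "sensitivity": wilson(tp, tp + fn),  # true positives / all reference-positives
        "specificity": wilson(tn, tn + fp),  # true negatives / all reference-negatives
    }

# Hypothetical counts (NOT from the study): 9 of 12 reference-positive cases
# detected by the self-collected swab gives a sensitivity of 0.75.
result = sensitivity_specificity(tp=9, fn=3, tn=115, fp=1)
print(result["sensitivity"])  # (0.75, lower bound, upper bound)
```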
Objectives/Goals: This review examined if sleep duration is associated with established Alzheimer’s disease (AD) fluid biomarkers, such as amyloid-β peptides (Aβ40 and Aβ42), total-tau (t-tau), phosphorylated tau (p-tau181 and p-tau217), neurofilament light chain (NfL), and glial fibrillary acidic protein (GFAP). Methods/Study Population: We searched PubMed, CINAHL, and SCOPUS through September 15, 2024, using keywords and appropriate subject headings related to AD, fluid biomarkers, and sleep. The search was developed and conducted in collaboration with a medical librarian. We also searched Google Scholar and screened the reference lists of relevant reviews. Two independent reviewers screened 1,657 peer-reviewed articles, of which 21 met the inclusion criteria (14 with biomarkers measured in cerebrospinal fluid [CSF] and 7 in blood). Two review authors independently extracted study details from included articles using a standardized data extraction template. Results/Anticipated Results: Sample sizes ranged from 18 to 4,712 participants. Sleep duration was assessed using self-reported measures in 8 studies and objective measures in 13. For the 14 studies using CSF biomarkers, lower Aβ42 (3/14), Aβ40 (1/14), or the ratio (1/14) were associated with either short or long sleep duration; t-tau (3/14) and p-tau181 (4/14) levels were mostly associated with short sleep. For the 7 blood-based biomarker studies, Aβ42 (2/7), Aβ40 (2/7), and the ratio (3/7) had mixed results with either short or long sleep. T-tau (1/7) and p-tau181 (1/7) levels were associated with long sleep; NfL (2/7) was associated with both short and long sleep. Six studies reported nonlinear relationships, with both short and long sleep associated with unfavorable biomarker profiles. None of the studies investigated p-tau 217 or GFAP. 
Discussion/Significance of Impact: Our results suggest that the relationship between sleep duration and AD fluid biomarkers is complex, highlighting the importance of sleep in AD risk assessment and prevention. The inconsistency in findings underscores the need for standardized study designs and measurement methods to clarify causality and inform clinical guidelines.
Objectives/Goals: The Standards for Reporting Implementation Studies (StaRI) are the Enhancing the Quality and Transparency of Health Research (EQUATOR) Network 27-item checklist for Implementation Science. This study quantifies StaRI adherence among self-defined Implementation Science studies in published Learning Health Systems (LHS) research. Methods/Study Population: A search strategy designed by a medical librarian identified original Implementation Science research published in one of the top 20 Implementation Science journals between 2017 and 2021. Inclusion criteria included studies or protocols describing the implementation of any intervention in healthcare settings. Exclusion criteria included concept papers, non-implementation research, or editorials. Full-text documents were reviewed by two investigators to abstract and judge StaRI implementation and intervention adherence, partial adherence, or non-adherence. Results/Anticipated Results: A total of 330 documents were screened, 97 met inclusion criteria, and 47 were abstracted, including 30 research studies and 17 protocols. Adherence to individual StaRI reporting items ranged from 13% to 100%. Most StaRI items were reported in >60% of manuscripts and protocols. The lowest adherence in research studies was noted around economic evaluation reporting for implementation (16%) or intervention (13%) strategies, harms (13%), contextual changes (30%), or fidelity of either the intervention (34%) or implementation (53%) approach. Subgroup analyses were infrequently contemplated or reported (43%). In protocols, the implications of the implementation strategy (41%) or intervention approach (47%) were not commonly reported.
Discussion/Significance of Impact: When leveraging implementation science to report reproducible and sustainable practice change initiatives, LHS researchers will need to include assessments of economics, harms, context, and fidelity in order to attain higher levels of adherence to EQUATOR’s StaRI checklist.
Late-life affective disorders (LLADs) are common and are projected to increase in prevalence by 2050. Several studies have linked late-life depression to an increased risk of dementia, but it is unclear whether bipolar affective disorder or anxiety disorders pose a similar risk.
Aims
We aimed to compare the risk of LLADs progressing to all-cause dementia, and to identify the demographic and clinical variables mediating this risk.
Methods
We used the South London and Maudsley National Health Service Foundation Trust Clinical Records Interactive Search system to identify patients aged 60 years or older with a diagnosis of any affective disorder. Cox proportional hazard models were used to determine differences in dementia risk between late-life anxiety disorders versus late-life depression, and late-life bipolar disorder versus late-life depression. Demographic and clinical characteristics associated with the risk of dementia were investigated.
Results
Some 5695 patients were identified and included in the final analysis. Of these, 388 had a diagnosis of bipolar affective disorder, 1365 had a diagnosis of an anxiety disorder and 3942 had a diagnosis of a depressive disorder. Bipolar affective disorder was associated with a lower hazard of developing dementia compared to depression (adjusted model including demographics and baseline cognition, hazard ratio: 0.60; 95% CI: 0.41–0.87). Anxiety disorders had a similar hazard of developing dementia (adjusted hazard ratio: 1.05; 95% CI: 0.90–1.22). A prior history of a depressive disorder reduced the risk of late-life depression progressing to dementia – suggesting the new onset of a depressive disorder in later life is associated with higher risk – but a prior history of anxiety disorders or bipolar affective disorder did not alter risk.
Conclusions
LLADs have a differential risk of developing all-cause dementia, with demographic- and illness-related factors influencing the risk. Further prospective cohort studies are needed to explore the link between LLADs and dementia development, and mediators of the lower risk of dementia associated with late-life bipolar disorder compared to late-life depression.
We assess the proposition that intergroup conflict (IGC) in non-human primates offers a useful comparison for studies of human IGC and its links to parochial altruism and prosociality. That is, for non-linguistic animals, social network integration and maternal influence promote juvenile engagement in IGC and can serve as the initial grounding for sociocultural processes that drive human cooperation. Using longitudinal data from three cohorts of non-adult vervet monkeys (Chlorocebus pygerythrus), we show that non-adults are sensitive to personal (age) and situational risk (participant numbers). The frequency and intensity of participation, although modulated by rank and temperament, both mirror maternal participation and reflect non-adult centrality in the grooming network. The possibility of social induction is corroborated by the distribution of grooming during IGC, with non-adults being more likely to be groomed if they were female, higher-ranking and participants themselves. Mothers were more likely to groom younger offspring participants of either sex, whereas other adults targeted higher-ranking female participants. Although we caution against a facile alignment of these outcomes to human culturally mediated induction, there is merit in considering how the embodied act of participation and the resultant social give-and-take might serve as the basis for a unified comparative investigation of prosociality.
Estimating the risk of developing bipolar disorder (BD) in children and adolescents (C&A) with depressive disorders is important to optimize prevention and early intervention efforts. We aimed to quantitatively examine the risk of developing BD from depressive disorders and identify factors which moderate this development.
Methods
In this systematic review and meta-analysis (PROSPERO:CRD42023431301), PubMed and Web-of-Science databases were searched for longitudinal studies reporting the percentage of C&A with ICD/DSM-defined depressive disorders who developed BD during follow-up. Data extraction, random-effects meta-analysis, between-study heterogeneity analysis, quality assessment, sub-group analyses, and meta-regressions were conducted.
Results
Thirty-nine studies were included, comprising 72,371 individuals (mean age=13.9 years, 57.1% females). Overall, 14.7% of C&A with a depressive disorder developed BD after 20.4–288 months of follow-up: 9.5% developed BD-I (95% CI=4.7% to 18.1%); 7.7% developed BD-II (95% CI=3.2% to 17.3%); and 19.8% (95% CI=9.9% to 35.6%) of C&A admitted to hospital with a depressive disorder developed BD. Studies using the DSM (21.6%, 95% CI=20.2% to 23.1%) and studies evaluating C&A with major depressive disorder (MDD) only (19.8%, 95% CI=16.8% to 23.1%) found higher rates of development of BD. Younger age at baseline, a history of hospitalization, and recruitment from specialized clinics were associated with an increased risk of developing BD at follow-up. Quality was good in 76.9% of included studies.
Conclusions
There is a substantial risk of developing BD in C&A with depressive disorders. This is particularly the case for C&A with MDD, DSM-diagnosed depressive disorders, and C&A admitted into the hospital. Research exploring additional predictors and preventive interventions is crucial.
Foliar postemergence applications of glufosinate are often made to glufosinate-resistant crops to provide nonselective weed control without significant crop injury. Rainfall, air temperature, solar radiation, and relative humidity near the time of application have been reported to affect glufosinate efficacy. However, previous research may not have captured the full range of weather variability to which glufosinate may be exposed before or following application. Additionally, climate models suggest more extreme weather will become the norm, further expanding the weather range to which glufosinate can be exposed. The objective of this research was to quantify the probability of successful weed control (efficacy ≥85%) with glufosinate applied to some key weed species across a broad range of weather conditions. A database of >10,000 North American herbicide evaluation trials was used in this study. The database was filtered to include treatments with a single postemergence application of glufosinate applied to waterhemp [Amaranthus tuberculatus (Moq.) Sauer], morningglory species (Ipomoea spp.), and/or giant foxtail (Setaria faberi Herrm.) <15 cm in height. These species were chosen because they are well represented in the database and listed as common and troublesome weed species in both corn (Zea mays L.) and soybean [Glycine max (L.) Merr.] (Van Wychen 2020, 2022). Individual random forest models were created for each weed species. Low rainfall (≤20 mm) over the 5 d before glufosinate application was detrimental to the probability of successful control of A. tuberculatus and S. faberi. Lower relative humidity (≤70%) and solar radiation (≤23 MJ m−2 d−1) on the day of application reduced the probability of successful weed control in most cases. Additionally, the probability of successful control decreased for all species when average air temperature over the first 5 d after application was ≤25 C.
As climate continues to change and become more variable, the risk of unacceptable control of several common species with glufosinate is likely to increase.
Foliar-applied postemergence herbicides are a critical component of corn (Zea mays L.) and soybean [Glycine max (L.) Merr.] weed management programs in North America. Rainfall and air temperature around the time of application may affect the efficacy of herbicides applied postemergence in corn or soybean production fields. However, previous research utilized a limited number of site-years and may not capture the range of rainfall and air temperatures that these herbicides are exposed to throughout North America. The objective of this research was to model the probability of achieving successful weed control (≥85%) with commonly applied postemergence herbicides across a broad range of environments. A large database of more than 10,000 individual herbicide evaluation field trials conducted throughout North America was used in this study. The database was filtered to include only trials with a single postemergence application of fomesafen, glyphosate, mesotrione, or fomesafen + glyphosate. Waterhemp [Amaranthus tuberculatus (Moq.) Sauer], morningglory species (Ipomoea spp.), and giant foxtail (Setaria faberi Herrm.) were the weeds of focus. Separate random forest models were created for each weed species by herbicide combination. The probability of successful weed control deteriorated when the average air temperature within the first 10 d after application was <19 or >25 C for most of the herbicide by weed species models. Additionally, drier conditions before postemergence herbicide application reduced the probability of successful control for several of the herbicide by weed species models. As air temperatures increase and rainfall becomes more variable, weed control with many of the commonly used postemergence herbicides is likely to become less reliable.
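The analysis pattern shared by both abstracts above — filter a trial database, define success as ≥85% observed control, then relate the probability of success to a weather variable — can be sketched minimally in Python. The records, field names, and values below are hypothetical, not drawn from the actual database; the published studies fit random forest models to >10,000 trials rather than computing simple stratified proportions.

```python
from statistics import mean

# Hypothetical trial records (the real database holds >10,000 trials):
# each has observed efficacy (%) and rainfall (mm) over the 5 d before application.
trials = [
    {"efficacy": 92, "rain_5d_before": 35},
    {"efficacy": 61, "rain_5d_before": 8},
    {"efficacy": 88, "rain_5d_before": 27},
    {"efficacy": 70, "rain_5d_before": 12},
    {"efficacy": 95, "rain_5d_before": 40},
    {"efficacy": 79, "rain_5d_before": 15},
]

SUCCESS = 85  # "successful" control threshold (% efficacy), as in the abstracts

def p_success(records):
    """Empirical probability that observed efficacy meets the success threshold."""
    return mean(1 if r["efficacy"] >= SUCCESS else 0 for r in records)

# Stratify by the low-rainfall condition (≤20 mm) flagged in the glufosinate study
dry = [r for r in trials if r["rain_5d_before"] <= 20]
wet = [r for r in trials if r["rain_5d_before"] > 20]

print(p_success(dry), p_success(wet))  # drier conditions -> lower success probability
```

A random forest generalizes this idea: instead of one hand-chosen split on one variable, it learns many splits across all weather features and averages them into a smoother probability estimate.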
Motor neuron disease (MND) is a progressive, fatal, neurodegenerative condition that affects motor neurons in the brain and spinal cord, resulting in loss of the ability to move, speak, swallow and breathe. Acceptance and commitment therapy (ACT) is an acceptance-based behavioural therapy that may be particularly beneficial for people living with MND (plwMND). This qualitative study aimed to explore plwMND’s experiences of receiving adapted ACT, tailored to their specific needs, and therapists’ experiences of delivering it.
Method:
Semi-structured qualitative interviews were conducted with plwMND who had received up to eight 1:1 sessions of adapted ACT and therapists who had delivered it within an uncontrolled feasibility study. Interviews explored experiences of ACT and how it could be optimised for plwMND. Interviews were audio recorded, transcribed and analysed using framework analysis.
Results:
Participants were 14 plwMND and 11 therapists. Data were coded into four over-arching themes: (i) an appropriate tool to navigate the disease course; (ii) the value of therapy outweighing the challenges; (iii) relevance to the individual; and (iv) involving others. These themes highlighted that ACT was perceived to be acceptable by plwMND and therapists, and many participants reported or anticipated beneficial outcomes in the future, despite some therapeutic challenges. They also highlighted how individual factors can influence experiences of ACT, and the potential benefit of involving others in therapy.
Conclusions:
Qualitative data supported the acceptability of ACT for plwMND. Future research and clinical practice should address expectations and personal relevance of ACT to optimise its delivery to plwMND.
Key learning aims
(1) To understand the views of people living with motor neuron disease (plwMND) and therapists on acceptance and commitment therapy (ACT) for people living with this condition.
(2) To understand the facilitators of and barriers to ACT for plwMND.
(3) To learn whether ACT that has been tailored to meet the specific needs of plwMND needs to be further adapted to potentially increase its acceptability to this population.
We recently reported on the radio-frequency attenuation length of cold polar ice at Summit Station, Greenland, based on bi-static radar measurements of radio-frequency bedrock echo strengths taken during the summer of 2021. Those data also allow studies of (a) the relative contributions of coherent (such as discrete internal conducting layers with sub-centimeter transverse scale) vs incoherent (e.g. bulk volumetric) scattering, (b) the magnitude of internal layer reflection coefficients, (c) limits on signal propagation velocity asymmetries (‘birefringence’) and (d) limits on signal dispersion in-ice over a bandwidth of ~100 MHz. We find that (1) attenuation lengths approach 1 km in our band, (2) after averaging 10 000 echo triggers, reflected signals observable over the thermal floor (to depths of ~1500 m) are consistent with being entirely coherent, (3) internal layer reflectivities are ≈−60 to −70 dB, (4) birefringent effects for vertically propagating signals are smaller by an order of magnitude relative to South Pole and (5) within our experimental limits, glacial ice is non-dispersive over the frequency band relevant for neutrino detection experiments.
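Finding (2) rests on a standard property of trigger averaging: a coherent echo adds linearly across aligned triggers, while incoherent noise averages down as 1/√N, so 10 000 triggers buy roughly a factor-100 noise reduction. The sketch below demonstrates this with a synthetic waveform and Gaussian noise; the waveform, sample count, and noise level are illustrative assumptions, not the experiment's data.

```python
import math
import random

random.seed(0)

N_TRIGGERS = 10_000   # number of averaged echo triggers, as in the measurement
N_SAMPLES = 64        # samples per recorded waveform (illustrative)

# A fixed (coherent) echo whose amplitude is far below the per-trigger noise (sigma = 1)
signal = [0.05 * math.sin(2 * math.pi * k / 16) for k in range(N_SAMPLES)]

# Average aligned triggers: the coherent echo is preserved,
# while the incoherent noise contribution shrinks as 1/sqrt(N)
avg = [0.0] * N_SAMPLES
for _ in range(N_TRIGGERS):
    for k in range(N_SAMPLES):
        avg[k] += (signal[k] + random.gauss(0.0, 1.0)) / N_TRIGGERS

# RMS of what remains after subtracting the known echo: the residual noise floor
residual_rms = math.sqrt(
    sum((avg[k] - signal[k]) ** 2 for k in range(N_SAMPLES)) / N_SAMPLES
)
print(residual_rms)  # ~ 1 / sqrt(10_000) = 0.01, well below the 0.05 echo amplitude
```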
At the start of a new community perinatal mental health service in Scotland we sought the opinions and aspirations of professional and lay stakeholders. A student elective project supported the creation of an anonymous 360-degree online survey of a variety of staff and people with lived experience of suffering from or managing perinatal mental health problems. The survey was designed and piloted with trainees and volunteer patients.
Results
A rich variety of opinions was gathered from the 60 responses, which came from a reasonably representative sample. Respondents provided specific answers to key questions and wrote free-text recommendations and concerns to inform service development.
Clinical implications
There is clear demand for the new expanded service, with strong support for provision of a mother and baby unit in the North of Scotland. The digital survey method could be adapted to generate future surveys to review satisfaction with service development and generate ideas for further change.
Clinical trials that fail prematurely due to poor design waste resources and deprive us of data for evaluating potentially effective interventions. This study used machine learning modelling to predict clinical trials’ success or failure and to understand the feature contributions driving this result. Features to power the modelling were engineered using data collected from the National Institute for Health and Care Research Innovation Observatory’s ScanMedicine database.
Methods
A large dataset containing 641,079 clinical trial records from 11 global clinical trial registries was extracted from ScanMedicine. Sixteen features were generated from the data based on fields relating to trial design and eligibility. Trials were labeled positive if they were completed (or target recruitment was achieved) or negative if terminated/withdrawn (or target recruitment was not achieved). To achieve optimal performance, phase-specific datasets were generated, and we focused on a subsample of Phase 2 trials (n=70,167). Ensemble models using bagging and boosting algorithms, including balanced random forest and extreme gradient boosting classifiers, were used for training and evaluating predictive performance. Shapley Additive Explanations (SHAP) was used to explain the output of the best-performing model and calculate feature contributions for individual studies.
Results
We achieved a weighted F1-score of 0.88, a Receiver Operating Characteristic Area Under the Curve score of 0.75, and a balanced accuracy of 0.75 on the test set with the XGBoost model. This result shows that the model can successfully distinguish between classes to predict if a trial will succeed or fail and subsequently output the features driving this outcome. The number of primary outcomes, whether the study was randomized, target sample size, and number of exclusion criteria were the most important features affecting the model’s prediction.
Conclusions
This study is the first to use predictive modelling on a large sample of clinical trial data obtained from 11 international trial registries. The prediction outcomes achieved by our novel approach, which uses phase-specific trained models, outperform previous modelling in this space.
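The labeling rule stated in the Methods — positive if a trial completed or met its recruitment target, negative if terminated or withdrawn — can be sketched as follows. The record fields and example values are purely illustrative; they are not ScanMedicine's actual schema, and the real pipeline derives sixteen features before fitting the ensemble models.

```python
# Hypothetical registry records (field names are illustrative, not ScanMedicine's schema)
records = [
    {"status": "Completed", "enrolled": 120, "target": 100, "n_primary_outcomes": 1,
     "randomized": True, "n_exclusion_criteria": 4},
    {"status": "Terminated", "enrolled": 18, "target": 200, "n_primary_outcomes": 5,
     "randomized": False, "n_exclusion_criteria": 22},
]

def label(rec):
    """1 = success (completed, or target recruitment achieved); 0 = failure."""
    if rec["status"] in ("Terminated", "Withdrawn"):
        return 0
    return 1 if rec["status"] == "Completed" or rec["enrolled"] >= rec["target"] else 0

def features(rec):
    """A few of the design/eligibility features described in the Methods."""
    return [rec["n_primary_outcomes"], int(rec["randomized"]),
            rec["target"], rec["n_exclusion_criteria"]]

labels = [label(r) for r in records]
X = [features(r) for r in records]
print(labels)  # [1, 0]
```

In the published pipeline, feature matrices like `X` and labels like `labels` would be split by trial phase and fed to the balanced random forest and XGBoost classifiers, with SHAP applied to the best model to attribute each prediction to individual features.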
The introduction pursues three aims: it examines the problems of representing beginnings in literary history in general; it explains the value of a literary history that focuses on the beginnings, rather than the subsequent development, of vernacular literatures in medieval Europe; and it describes the advantages of a comparative approach. In respect of the first aim, it argues that we should neither posit a unitary beginning for literature in any language nor think in terms of causes and effects: every literary tradition passes through multiple moments of incipience and opening, and their study reveals conditions of possibility, not mechanistic causes. Second, a concentration on beginning obliges us to define what begins, thereby bringing to light the distinctive features of each vernacular literature. Third, the comparative perspective reveals the matrix of defining characteristics that the nascent European literatures of the Middle Ages all share: their manuscript materiality, their institutionalization in systems of textual practice which confer stability and persistence in space and time, and, finally, their linguistic vernacularity, which defines them over against their respective ‘parental’ literacies in Latin, Greek, or Church Slavonic.
This chapter surveys the formation of German vernacular literature between the ninth and thirteenth centuries. Instead of one single beginning, from which all the rest flows, we encounter a series of inaugural gestures and moments of inception, not all of which extended into posterity. The great monuments of Old High German literature, produced in the ninth century, are isolated works that did not give rise to continuous traditions of textual production; for that development, we have to wait until the second half of the eleventh century, when an astonishingly self-assured and formally sophisticated literature – now linguistically Middle High German – burst onto the scene. In the course of the twelfth century, religious genres were joined by secular ones, and the pragmatic functions of informing and instructing the public were supplemented by an interest in the potentialities of poetic language and distinctly literary modes of cognition. Finally, by the early thirteenth century, a palpable sense had emerged among collectors and authors that German literature has both a canon and a history; the constitution of manuscript anthologies and literary genealogies represents a further beginning in the formation of German literature as a dynamic system, as well as itself positing beginnings.
How did new literatures begin in the Middle Ages and what does it mean to ask about such beginnings? These are the questions this volume pursues across the regions and languages of medieval Europe, from Iceland, Scandinavia, and Iberia through Irish, Welsh, English, French, Dutch, Occitan, German, Italian, Czech, and Croatian to Medieval Greek and the East Slavonic of early Rus. Focusing on vernacular scripted cultures and their complicated relationships with the established literary cultures of Latin, Greek, and Church Slavonic, the volume's contributors describe the processes of emergence, consolidation, and institutionalization that make it possible to speak of a literary tradition in any given language. Moreover, by concentrating on beginnings, the volume avoids the pitfalls of viewing earlier phenomena through the lens of later, national developments; the result is a heightened sense of the historical contingency of categories of language, literature, and territory in the space we call 'Europe'.