Increasingly, secure forensic mental health services must balance reducing restrictive practices on one hand with keeping a violence-free environment on the other. Nursing staff and other hospital staff have the right to work in a safe environment, and they should not be subject to intimidation or assault in the work setting. Patients have the right to care in a safe environment, and they need to have confidence that staff members can keep them safe during their in-patient stay. Downplaying in-patient violence, or a forensic patient’s past violence, undermines an area of significant treatment need and may seriously limit the patient’s chance of a future successful discharge into the community. We posit in this chapter that active and careful management of ward milieu and dynamics, active treatment of psychotic and other symptoms, proportionate use of restrictive practice, and thorough evaluation of any and all restrictive practice together constitute the most effective way of managing a forensic in-patient setting to reduce and prevent incidents of violence.
OBJECTIVES/GOALS: The COVID-19 pandemic disrupted HIV care, though it also prompted preventive measures against respiratory pathogens, particularly among people with HIV (PWH). We therefore quantified trends in respiratory AIDS-defining event (ADE) incidence during vs. before the COVID-19 pandemic to assess the effects of these measures on non-COVID-19 illnesses. METHODS/STUDY POPULATION: We included PWH aged ≥18 years in care at the Vanderbilt Comprehensive Care Clinic in Nashville, Tennessee from 2017-2023. Individuals contributed time from the later of March 31, 2017 or clinic enrollment until the earliest of death, March 31, 2023 (study close), or final clinic visit (if there was no visit ≤12 months before study close). We described respiratory ADE incidences (per 1,000 person-years) in each year of the study; we used Poisson regression with robust variance to estimate the incidence rate ratio (IRR) and 95% confidence interval (CI) for respiratory ADEs in the three years following vs. before the World Health Organization’s pandemic designation for COVID-19 (March 2020). RESULTS/ANTICIPATED RESULTS: Among 4,880 persons contributing 19,510 person-years, 69 (1.4%) developed ≥1 respiratory ADE. Median age at cohort entry was 42.6 (interquartile range [IQR]: 32.1, 52.3) years and at first respiratory ADE was 43.6 (IQR: 36.1, 51.2) years. The overall average respiratory ADE incidence in the pre-pandemic period (March 2017-March 2020) was 4.5 (95% CI: 3.3-6.3) per 1,000 person-years and during the post-pandemic period (April 2020-March 2023) was 4.1 (95% CI: 1.8-9.0) per 1,000 person-years. When accounting for repeated outcomes and annual variation, the modeled respiratory ADE incidence was 10% lower (IRR=0.9, 95% CI: 0.6-1.4) during vs. before the COVID-19 pandemic. DISCUSSION/SIGNIFICANCE: Respiratory ADE incidence dropped 10% following the COVID-19 pandemic declaration, though the confidence interval for this change contains the null.
It is plausible that nonpharmaceutical COVID-19 mitigation measures drove a transient decline, though further research is needed to assess whether diagnostic biases also played a role.
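The headline estimate above (IRR = 0.9) rests on the standard incidence-rate-ratio construction. The study itself used Poisson regression with robust variance, but the core arithmetic can be sketched in a few lines; the counts and person-years below are hypothetical, not the study's data:

```python
import math

# Hypothetical event counts and follow-up time (NOT the study's data):
# respiratory events before vs. during the pandemic period.
pre_events, pre_py = 44, 9755.0
post_events, post_py = 40, 9755.0

rate_pre = pre_events / pre_py      # incidence per person-year
rate_post = post_events / post_py
irr = rate_post / rate_pre          # incidence rate ratio

# Large-sample 95% CI on the log scale: SE(log IRR) ~ sqrt(1/a + 1/b)
se = math.sqrt(1 / post_events + 1 / pre_events)
ci = (irr * math.exp(-1.96 * se), irr * math.exp(1.96 * se))
```

With these toy counts the IRR is below 1 but its confidence interval spans 1, mirroring the abstract's "contains the null" reading; a regression model would additionally adjust for annual variation and repeated outcomes.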
This chapter formulates a taxonomy to classify the different kinds of camera-operating victim characters in found footage horror films: amateurs, students and cinematographers. It considers some key theoretical notions, including priming, Murray Smith’s structure of sympathy and how off-screen space is articulated in found footage horror films. In order to develop the taxonomy, this chapter examines how viewers respond to camera-operating victims in several key examples, including Man Bites Dog (Belvaux et al., 1992), Hollow (Axelgaard, 2011), Grave Encounters 2 (Poliquin, 2012), The Asylum Tapes (Stone, 2012), Afflicted (Lee and Prowse, 2013), Devil’s Due (Bettinelli-Olpin and Gillett, 2014) and Blair Witch (Wingard, 2016).
Found footage horror films feature diegetic camera operators who usually become the victims of whatever antagonistic force they are recording during the events of the narrative. Since Heather Donahue, Joshua Leonard and Mike Williams went camping in the woods of Maryland in The Blair Witch Project (Myrick and Sanchez, 1999), the found footage narrational technique has become ubiquitous in the horror genre. Found footage horror films are perfect examples of what Thomas M. Sipos calls “pragmatic aesthetic … when a filmmaker puts technical and budgetary compromises to artistic effect” (29). Although the diegetic camera technique affects the way that viewers recognise and empathise with all of these victims, the ways that these operators use their cameras often differ. There is therefore a need to construct a taxonomy of how these victims use their diegetic cameras. This taxonomy will classify the different kinds of camera-operating victim characters in found footage horror films: amateurs, students and cinematographers. The different ways that camera operators approach cinematography must be carefully established in the opening scenes of the film in order to prime the viewer for a particular experience, and for a particular form of what I have previously labelled “mediated realism” (Turner, Found Footage 9). Amateurs, students and cinematographers have varying degrees of interaction with their profilmic subjects, and there are therefore varying degrees to which they are recognised as characters by the viewer. The more vocal they are as off-screen characters, the more the viewer is required to imagine off-screen space.
In order to create and elucidate this taxonomy, it is first necessary to consider some key theoretical notions.
We investigate a variety of cut-and-choose games and their relationship with (generic) large cardinals, and show that these games can be used to characterize a number of properties of ideals and of partial orders: certain notions of distributivity, strategic closure, and precipitousness.
The Altar Stone at Stonehenge in Wiltshire, UK, is enigmatic in that it differs markedly from the other bluestones. It is a grey–green, micaceous sandstone and has been considered to be derived from the Old Red Sandstone sequences of South Wales. Previous studies, however, have been based on presumed derived fragments (debitage) that have been identified visually as coming from the Altar Stone. Portable X-ray fluorescence (pXRF) analyses were conducted on these fragments (ex situ) as well as on the Altar Stone (in situ). Light elements (Z<37) in the Altar Stone analyses, performed after a night of heavy rain, were affected by surface and pore water, which attenuates low-energy X-rays; the dry analyses of debitage fragments, however, produced data for a full suite of elements. High-Z elements, including Zr, Nb, Sr, Pb, Th and U, all occupy the same compositional space in the Altar Stone and debitage fragments, and are statistically indistinguishable, indicating that the fragments are derived from the Altar Stone. Barium compares very closely between the debitage and the Altar Stone, with differences being related to variable baryte distribution in the Altar Stone, limited accessibility of its surface for analysis, and probably to surface weathering.
A notable feature of the Altar Stone sandstone is the presence of baryte (up to 0.8 modal%), manifest as relatively high Ba in both the debitage and the Altar Stone. These high Ba contents are in marked contrast with those in a small set of Old Red Sandstone field samples, analysed alongside the Altar Stone and debitage fragments, raising the possibility that the Altar Stone may not have been sourced from the Old Red Sandstone sequences of Wales. This high Ba ‘fingerprint’, related to the presence of baryte, may provide a rapid test using pXRF in the search for the source of the Stonehenge Altar Stone.
The coronavirus disease (COVID-19) pandemic has had profound consequences on collective mental health and well-being, and yet, older adults appear better off than younger adults. The current study examined mental health impacts of the pandemic across adult age groups in a large sample (n = 5,320) of Canadians using multiple hierarchical regression analyses. Results suggest older adults are experiencing better mental health and more social connectedness relative to younger adults. Loneliness predicted negative mental health outcomes across all age groups, while the negative association between social support and mental health was only significant at average and high levels of loneliness in the 65–69 age group. Results point towards differential mental health impacts of the pandemic across adult age groups and indicate that loneliness and social support may be key intervention targets during the COVID-19 pandemic. Future research should further examine mechanisms of resiliency among older Canadian adults during the pandemic.
Accurately dating the creation and development of earthwork features is a long-standing problem for archaeologists. This article presents results from Bosigran (Cornwall, UK), where boundary banks believed to be prehistoric in origin are assessed using optically stimulated luminescence profiling and dating (OSL-PD). The results provide secure construction dates for different boundaries in the Bronze and Iron Ages, as well as chronologies for their early medieval and later development. The research demonstrates not only the prehistoric origins of these distinctive Cornish field systems, but also a practical and cost-effective methodology suitable for dating earthworks around the world.
Item 9 of the Patient Health Questionnaire-9 (PHQ-9) asks about thoughts of death and self-harm, but not about suicidality specifically. Although it is sometimes used to assess suicide risk, most positive responses are not associated with suicidality. The PHQ-8, which omits Item 9, is thus increasingly used in research. We assessed the equivalency of total score correlations and of the diagnostic accuracy to detect major depression of the PHQ-8 and PHQ-9.
Methods
We conducted an individual patient data meta-analysis. We fit bivariate random-effects models to assess diagnostic accuracy.
Results
16 742 participants (2097 major depression cases) from 54 studies were included. The correlation between PHQ-8 and PHQ-9 scores was 0.996 (95% confidence interval 0.996 to 0.996). The standard cutoff score of 10 for the PHQ-9 maximized sensitivity + specificity for the PHQ-8 among studies that used a semi-structured diagnostic interview reference standard (N = 27). At cutoff 10, the PHQ-8 was less sensitive by 0.02 (−0.06 to 0.00) and more specific by 0.01 (0.00 to 0.01) among those studies (N = 27), with similar results for studies that used other types of interviews (N = 27). For all 54 primary studies combined, across all cutoffs, the PHQ-8 was less sensitive than the PHQ-9 by 0.00 to 0.05 (0.03 at cutoff 10), and specificity was within 0.01 for all cutoffs (0.00 to 0.01).
Conclusions
PHQ-8 and PHQ-9 total scores were similar. Sensitivity may be minimally reduced with the PHQ-8, but specificity is similar.
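The cutoff analysis above reduces, at each threshold, to a 2×2 comparison of screen-positive status against the reference-standard diagnosis. A toy sketch (hypothetical scores, not study data) of how sensitivity and specificity at the standard cutoff of 10 are computed:

```python
# Toy illustration (made-up scores, NOT study data): sensitivity and
# specificity of a questionnaire cutoff against a reference diagnosis.
def sens_spec(scores, diagnoses, cutoff=10):
    """Classify score >= cutoff as screen-positive."""
    tp = sum(s >= cutoff and d for s, d in zip(scores, diagnoses))
    fn = sum(s < cutoff and d for s, d in zip(scores, diagnoses))
    tn = sum(s < cutoff and not d for s, d in zip(scores, diagnoses))
    fp = sum(s >= cutoff and not d for s, d in zip(scores, diagnoses))
    return tp / (tp + fn), tn / (tn + fp)

scores    = [3, 12, 15, 8, 10, 5, 18, 9, 11, 2]
diagnoses = [0,  1,  1, 0,  1, 0,  1, 0,  0, 0]  # 1 = major depression
sens, spec = sens_spec(scores, diagnoses)
```

Sweeping `cutoff` over the score range and comparing PHQ-8 with PHQ-9 totals on the same participants is, in essence, the comparison the meta-analysis pooled across studies (with bivariate random-effects models handling between-study variation).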
OBJECTIVES/SPECIFIC AIMS: Previous research suggests that weight loss during early TB treatment (first two months of anti-TB therapy) is a predictor of poor tuberculosis (TB) treatment outcomes among HIV-negative populations, but the relationship has not been well studied in the context of HIV. We examined the association between HIV and weight change during the first two months of anti-tuberculosis treatment, and also assessed the effects of HIV and early weight change on tuberculosis (TB) treatment outcomes. METHODS/STUDY POPULATION: Adults with culture-confirmed, drug-susceptible, pulmonary TB, regardless of HIV status, were enrolled into the Regional Prospective Observational Research for Tuberculosis (RePORT)-Brazil cohort and followed on standard anti-TB therapy. For the primary analysis, we compared weight change in persons living with HIV (PLWH) and HIV-negative patients between baseline and two months using multivariable bootstrapped quantile regression and modified Poisson regression. For secondary analysis, we examined the separate effects of HIV and weight change on poor TB treatment outcome (treatment failure, TB recurrence, or death) using Cox proportional hazards regression. RESULTS/ANTICIPATED RESULTS: Among 323 participants, 45 (14%) were HIV-positive. On average, PLWH lost 0.7% (interquartile range (IQR): −5.1%, 4.4%) of their baseline body weight between baseline and two months; those without HIV gained 3.5% (IQR: 0.8%, 6.7%). After adjusting for age, sex, and baseline BMI, PLWH lost 4.1% (95% confidence interval (CI): −6.5%, −1.6%) more weight during the first two months of anti-TB treatment than HIV-negative individuals. HIV infection was associated with weight loss ≥5% (adjusted odds ratio = 9.3; 95% CI: 4.2-20.6). Regarding the secondary analysis, 14 patients had a poor TB treatment outcome: 2 treatment failures, 4 cases of recurrent TB, and 8 deaths. 
PLWH and patients who lost ≥5% weight had significantly increased risk of poor TB treatment outcome with hazard ratios of 8.77 (95% CI: 2.96-25.94) and 4.09 (95% CI: 1.11-15.14), respectively. DISCUSSION/SIGNIFICANCE OF IMPACT: Our results suggest that HIV is associated with weight loss during early TB treatment, and both HIV and early weight loss were associated with poor treatment outcome. Future research should examine the potential etiologies of these findings and identify the types of interventions that would best promote weight gain during TB treatment, especially among PLWH, in order to prevent poor TB treatment outcomes.
Previous studies on reporting bias generally examined whether trials were published in stand-alone publications. In this study, we investigated whether pooled-trials publications constitute a specific form of reporting bias. We assessed whether negative trials were more likely to be exclusively published in pooled-trials publications than positive trials and examined the research questions, individual trial results, and conclusions presented in these articles.
Methods
Data from a cohort of 105 randomized controlled trials of 16 antidepressants were extracted from earlier publications and the corresponding Food and Drug Administration (FDA) reviews. A systematic literature search was conducted to identify pooled-trials publications.
Results
We found 107 pooled-trials publications that reported results of 23 (72%) of 32 trials not published in stand-alone publications. Only two (3.8%) of 54 positive trials were published exclusively in pooled-trials publications, compared with 21 (41.1%) of 51 negative trials (p < 0.001). For thirteen (12%) of the 107 publications, the primary aim was to present data on the trial's primary research question (drug efficacy compared with placebo). Only four of these publications, reporting on five (22%) trials, presented individual efficacy data for the primary research question. Additionally, only five (5%) of 107 pooled-trials publications had a negative conclusion.
Conclusions
Compared with positive trials, negative trials of antidepressants for depression were much more likely to be reported exclusively in pooled-trials publications. Pooled-trials publications flood the evidence base with often-redundant articles that, instead of addressing the original primary research question, present (positive) results on secondary questions. Therefore, pooled-trials publications distort the apparent risk–benefit profile of antidepressants.
The mental and physical health of individuals with a psychotic illness are typically poor. Access to psychosocial interventions is important but currently limited. Telephone-delivered interventions may assist. In the current systematic review, we aim to summarise and critically analyse evidence for telephone-delivered psychosocial interventions targeting key health priorities in adults with a psychotic disorder, including (i) relapse, (ii) adherence to psychiatric medication and/or (iii) modifiable cardiovascular disease risk behaviours.
Methods
Ten peer-reviewed and four grey literature databases were searched for English-language studies examining psychosocial telephone-delivered interventions targeting relapse, medication adherence and/or health behaviours in adults with a psychotic disorder. Study heterogeneity precluded meta-analyses.
Results
Twenty trials [13 randomised controlled trials (RCTs)] were included, involving 2473 participants (relapse prevention = 867; medication adherence = 1273; and health behaviour = 333). Five of eight RCTs targeting relapse prevention and one of three targeting medication adherence reported at least 50% of outcomes in favour of the telephone-delivered intervention. The two health-behaviour RCTs found comparable levels of improvement across treatment conditions.
Conclusions
Although most interventions combined telephone and face-to-face delivery, there was evidence to support the benefit of entirely telephone-delivered interventions. Telephone interventions represent a potentially feasible and effective option for improving key health priorities among people with psychotic disorders. Further methodologically rigorous evaluations are warranted.
Different diagnostic interviews are used as reference standards for major depression classification in research. Semi-structured interviews involve clinical judgement, whereas fully structured interviews are completely scripted. The Mini International Neuropsychiatric Interview (MINI), a brief fully structured interview, is also sometimes used. It is not known whether interview method is associated with probability of major depression classification.
Aims
To evaluate the association between interview method and odds of major depression classification, controlling for depressive symptom scores and participant characteristics.
Method
Data collected for an individual participant data meta-analysis of Patient Health Questionnaire-9 (PHQ-9) diagnostic accuracy were analysed and binomial generalised linear mixed models were fit.
Results
A total of 17 158 participants (2287 with major depression) from 57 primary studies were analysed. Among fully structured interviews, odds of major depression were higher for the MINI compared with the Composite International Diagnostic Interview (CIDI) (odds ratio (OR) = 2.10; 95% CI = 1.15–3.87). Compared with semi-structured interviews, fully structured interviews (MINI excluded) were non-significantly more likely to classify participants with low-level depressive symptoms (PHQ-9 scores ≤6) as having major depression (OR = 3.13; 95% CI = 0.98–10.00), similarly likely for moderate-level symptoms (PHQ-9 scores 7–15) (OR = 0.96; 95% CI = 0.56–1.66) and significantly less likely for high-level symptoms (PHQ-9 scores ≥16) (OR = 0.50; 95% CI = 0.26–0.97).
Conclusions
The MINI may identify more people as depressed than the CIDI, and semi-structured and fully structured interviews may not be interchangeable methods, but these results should be replicated.
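The odds ratios above come from binomial generalised linear mixed models, but the basic quantity can be illustrated on a toy 2×2 table. The counts below are hypothetical, and the confidence interval uses the standard Woolf log-odds approximation rather than the study's mixed-model machinery:

```python
import math

# Hypothetical 2x2 table (NOT the study's data):
# rows = interview type, columns = classified as depressed yes/no.
mini_yes, mini_no = 30, 170
cidi_yes, cidi_no = 15, 185

# Odds ratio: odds of classification under one interview vs. the other.
odds_ratio = (mini_yes / mini_no) / (cidi_yes / cidi_no)

# Woolf (log) method for an approximate 95% CI.
se = math.sqrt(1/mini_yes + 1/mini_no + 1/cidi_yes + 1/cidi_no)
log_or = math.log(odds_ratio)
ci = (math.exp(log_or - 1.96 * se), math.exp(log_or + 1.96 * se))
```

A mixed model generalises this by adjusting for symptom scores and participant characteristics while allowing study-level random effects, but the reported OR retains this same interpretation.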
Declaration of interest
Drs Jetté and Patten declare that they received a grant, outside the submitted work, from the Hotchkiss Brain Institute, which was jointly funded by the Institute and Pfizer. Pfizer was the original sponsor of the development of the PHQ-9, which is now in the public domain. Dr Chan is a steering committee member or consultant of Astra Zeneca, Bayer, Lilly, MSD and Pfizer. She has received sponsorships and honorarium for giving lectures and providing consultancy and her affiliated institution has received research grants from these companies. Dr Hegerl declares that within the past 3 years, he was an advisory board member for Lundbeck, Servier and Otsuka Pharma; a consultant for Bayer Pharma; and a speaker for Medice Arzneimittel, Novartis, and Roche Pharma, all outside the submitted work. Dr Inagaki declares that he has received grants from Novartis Pharma, lecture fees from Pfizer, Mochida, Shionogi, Sumitomo Dainippon Pharma, Daiichi-Sankyo, Meiji Seika and Takeda, and royalties from Nippon Hyoron Sha, Nanzando, Seiwa Shoten, Igaku-shoin and Technomics, all outside of the submitted work. Dr Yamada reports personal fees from Meiji Seika Pharma Co., Ltd., MSD K.K., Asahi Kasei Pharma Corporation, Seishin Shobo, Seiwa Shoten Co., Ltd., Igaku-shoin Ltd., Chugai Igakusha and Sentan Igakusha, all outside the submitted work. All other authors declare no competing interests. No funder had any role in the design and conduct of the study; collection, management, analysis and interpretation of the data; preparation, review or approval of the manuscript; and decision to submit the manuscript for publication.
Syphacia stroma (von Linstow, 1884) Morgan, 1932 and Syphacia frederici Roman, 1945 are oxyurid nematodes that parasitize two murid rodents, Apodemus sylvaticus and Apodemus flavicollis, on the European mainland. Only S. stroma has been recorded previously in Apodemus spp. from the British Isles. Despite the paucity of earlier reports, we identified S. frederici in four disparate British sites, two in Nottinghamshire, one each in Berkshire and Anglesey, Wales. Identification was based on their site in the host (caecum and not small intestine), on key morphological criteria that differentiate this species from S. stroma (in particular the tail of female worms) and by sequencing two genetic loci (cytochrome C oxidase 1 gene and a section of ribosomal DNA). Sequences derived from both genetic loci of putative British S. frederici isolates formed a tight clade with sequences from continental worms known to be S. frederici, clearly distinguishing these isolates from S. stroma which formed a tight clade of its own, distinct from clades representative of Syphacia obvelata from Mus and S. muris from Rattus. The data in this paper therefore constitute the first record of S. frederici from British wood mice, and confirm the status of this species as distinct from both S. obvelata and S. stroma.
U60 ([UO2(O2)(OH)]60^60− in water) is a uranyl peroxide nanocluster with a fullerene topology and Oh symmetry. U60 clusters can exist in crystalline solids or in liquids; however, little is known of their behavior at high pressures. We compressed the U60-bearing material Li68K12(OH)20[UO2(O2)(OH)]60(H2O)310 ($Fm\bar 3$; a = 37.884 Å) in a diamond anvil cell to determine its response to increasing pressure. Three length scales and corresponding structural features contribute to the compression response: uranyl peroxide bonds (<0.5 nm), isolated single nanoclusters (2.5 nm), and the long-range periodicity of nanoclusters within the solid (>3.7 nm). Li68K12(OH)20[UO2(O2)(OH)]60(H2O)310 transformed to a tetragonal structure below 2 GPa and irreversibly amorphized between 9.6 and 13 GPa. The bulk modulus of the tetragonal U60-bearing material was 25 ± 2 GPa. The pressure-induced amorphous phase contained intact U60 clusters, which were preserved beyond the loss of long-range periodicity. The persistence of U60 clusters at high pressure may have been enhanced by the interaction between U60 nanoclusters and the alcohol pressure medium. Once formed, U60 nanoclusters persist regardless of their associated long-range ordering—in crystals, amorphous solids, or solutions.
Puccinia obtegens (Link) Tul., an autoecious rust pathogen, is a potential biological control agent of Canada thistle [Cirsium arvense (L.) Scop.]. Ten ecotypes of Canada thistle were inoculated with uredospores of P. obtegens, and sporulation was observed on all ecotypes. Infection types varied among and within ecotypes, indicating that host resistance is one factor limiting rust infection. No correlation was found between Canada thistle susceptibility to the rust and host plant ecotype classification, stomatal density, amount of leaf pubescence, or spore germinability on leaf surfaces.
At some point during our inaugural research team workshop we started to generate many different ideas about how to increase participation in heritage decision-making. We tried to keep track as the questions flowed by writing recurring words on pieces of paper, to be linked, connected and ordered at some later point. The words were in some ways not surprising. Heritage, of course. Stewardship. Custodianship. Expert. Leadership. Institutions. Ownership. Differences/Tensions. Scale. Personal. Values. Voice (‘+ not heard’, was added in another hand in biro). So far, so predictable. These words, after all, index the big conceptual challenges that have been identified to a greater or lesser extent in heritage policy, practice and its research for the last four decades. Yet as we spoke, each of these terms started to change in dimension. As the different people around the table gave examples, and checked they understood each other's contributions, the familiar words were in the process of gathering new uncertainties and ambiguities as well as new colours, textures, shapes and potentials.
We were brought together by a funding scheme that supported not just collaborative research, but also its collaborative design. While we did have a shared interest in our overall question ‘how should heritage decisions be made?’, we – as you will see by how we describe ourselves – came to this question, and our first workshop, from quite different places and different trajectories. To frame it in the language implied by this book, we carried with us different inheritances – legacies – from our disciplines, professional backgrounds, organisations and places. As such, the other crucial thing we had in common was an interest in the potential for rethinking ‘heritage’ offered by drawing on many different perspectives and working across hierarchies and institutional boundaries. We used both these shared commitments and our different perspectives to collaboratively design our project.
In this chapter we tell the story of our project, with the aim of showing how our research emerged through dynamic connections between know-how (generated through practitioner reflections), dialogue (the conversations between us as a project team) and conceptual innovation (the way this allowed us to think about heritage and decision-making differently).
Recognition that alien plants pose a significant threat to biodiversity has not always translated into effective management strategies, policy reforms, and systems to establish priorities. Thus, many alien plant management decisions for the protection of biodiversity occur with limited knowledge of what needs to be protected (other than biodiversity in a generalized sense) or the urgency of actions. To rectify this, we have developed a triage system that enables alien plant management decisions to be made based on (1) the urgency of control relative to the degree of threat posed to biodiversity, compared with (2) the likelihood of achieving a successful conservation outcome as a result of alien plant control. This triage system is underpinned by a two-step approach, which identifies the biodiversity at risk and assesses sites to determine priorities for control. This triage system was initially developed to manage the threat posed by bitou bush to native species in New South Wales (NSW), Australia. It has subsequently been improved with the national assessment of lantana in Australia, and the adaptation from a single to multiple alien plant species approach on a regional scale. This triage system identifies nine levels of priority for alien plant management aimed at biodiversity conservation, ranging from immediate, targeted action to limited or no action. The development of this approach has enabled long-term management priorities to be set for widespread alien plants that are unlikely to be eradicated. It also enables control to occur in a coordinated manner for biodiversity conservation at a landscape scale, rather than as a series of individual unconnected short-term actions.
Bridal creeper has become a serious environmental weed in southern Australia. Historically the invaded areas had low soil nutrient levels. However, our field surveys indicate that soils in bridal creeper–invaded areas have higher phosphorus and iron levels than soils in nearby native reference areas regardless of the proximity to agriculture or other disturbances. A glasshouse experiment was undertaken to determine the influence of increased nutrients on plants that co-occur with bridal creeper in order to (1) assess the impact of changed soil conditions and (2) predict the response of dominant species following the biological control of bridal creeper. The relative growth rate (RGR) of bridal creeper, two native shrubs (narrow-leaved thomasia [Thomasia angustifolia] and bluebell creeper [Billardiera heterophylla]), and an invasive exotic grass (annual veldt grass [Ehrharta longiflora]) were determined in three soil types: soil collected within a bridal creeper stand, soil collected from a nearby reference area, and a potting mix with nutrient levels higher than that recorded in the field. The plant species were chosen due to their association with bridal creeper. For example, the native species narrow-leaved thomasia was identified in a previous survey as the most abundant shrub at the invaded site where the soil was collected. The two other species, bluebell creeper and annual veldt grass, were identified from a previous seedbank trial as being abundant (in the seedbank) and able to readily germinate in invaded areas. When grown in either the bridal creeper–invaded soil or reference soil, bluebell creeper had significantly lower RGRs than narrow-leaved thomasia and annual veldt grass. 
However, as all these species showed increases in RGRs between reference soil and bridal creeper soil, this study indicates that for at least these three species the impact of increased nutrients may not be a barrier to the recovery of invaded areas following the control of bridal creeper.