Previous studies have identified clusters of first-episode psychosis (FEP) patients based on cognition and premorbid adjustment. This study examined a range of socio-environmental risk factors associated with these clusters, aiming (a) to compare FEP clusters and community controls using the Maudsley Environmental Risk Score for psychosis (ERS), a weighted sum of the following risk factors: paternal age, childhood adversities, cannabis use, and ethnic minority membership; and (b) to explore whether specific environmental risk factors differ between patient clusters and distinguish them from controls.
Methods
A univariable general linear model (GLM) compared the ERS between 1,263 community controls and clusters derived from 802 FEP patients in the EU-GEI study, namely low-cognitive-functioning (n = 223), high-cognitive-functioning (n = 205), intermediate (n = 224) and deteriorating (n = 150) clusters. A multivariable GLM compared clusters and controls on the different exposures included in the ERS.
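Since the ERS is a composite score, the comparison described above reduces to a one-way GLM of score on group. The following is a minimal, hypothetical sketch in Python; the file and column names are illustrative assumptions, not the EU-GEI variables.

```python
# Hypothetical sketch: comparing a composite environmental risk score (ERS)
# across FEP clusters and controls with a general linear model (GLM).
# File and column names are illustrative, not the EU-GEI dataset.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("fep_clusters.csv")  # columns: ers, cluster
# cluster levels: control, low, high, intermediate, deteriorating

# Univariable GLM with controls as the reference level, so each coefficient
# is the mean ERS difference (beta, with 95% CI) between a cluster and controls.
model = smf.ols("ers ~ C(cluster, Treatment(reference='control'))", data=df).fit()
print(model.summary())

# Effect size per term as eta-squared: SS_effect / SS_total.
aov = sm.stats.anova_lm(model, typ=2)
print(aov["sum_sq"] / aov["sum_sq"].sum())
```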
Results
The ERS was higher in all clusters than in controls, most markedly in the deteriorating (β = 2.8, 95% CI 2.3 to 3.4, η² = 0.049) and low-cognitive-functioning (β = 2.4, 95% CI 1.9 to 2.8, η² = 0.049) clusters, and distinguished both from the high-cognitive-functioning cluster. The deteriorating cluster had higher cannabis exposure (mean difference = 0.48, 95% CI 0.49 to 0.91) than the intermediate cluster, which had an identical IQ, and included more people from an ethnic minority (mean difference = 0.77, 95% CI 0.24 to 1.29) than the high-cognitive-functioning cluster.
Conclusions
High exposure to environmental risk factors might result in cognitive impairment and lower-than-expected functioning in individuals at the onset of psychosis. Some patients’ trajectories involved risk factors that could be modified by tailored interventions.
Scholarly work in American politics has yet to confront one of the nation’s starkest inequalities: lethal violence. The risk falls disproportionately on Black Americans, but much like poverty and inequality, lethal violence is a broadly American problem that African Americans are disproportionately likely to experience. The lack of attention to life-threatening violence has limited our understanding of race, criminal justice, and the nature of the American state. We draw on work in American political development and racial politics to extend a racialized state failure framework for understanding the United States as a high-violence society. Life-threatening violence declined dramatically in the nineteenth century in countries where state building involved the integrated consolidation of centralized violence monopolization and universal male suffrage. Such efforts faltered in the US, however, and violence thrived. We argue that this racialized state failure is the result of two reinforcing features of American politics: anti-transformative racial orders and institutional fragmentation. Fragmentation has long provided opportunities for anti-transformative racial orders to limit national intervention in violence control and enfranchisement, even during critical junctures when institutions are less determinate, and actions by decision makers are more likely to generate change. We illustrate the disruption of state building by racial orders, which minimized the state’s capacity to delegitimize violent self-help during two critical junctures in the US: Reconstruction and the crime wave of the mid- to late twentieth century. The resulting institutional configuration, which we refer to as forced localism, reinforces the jurisdictional authority of highly constrained state and local institutions in violence attenuation. The consequence is exceptionally high rates of serious violence and a harsh and exclusionary criminal justice system, with Black Americans exceptionally vulnerable to both.
Accurate diagnosis of bipolar disorder (BPD) is difficult in clinical practice, with an average delay between symptom onset and diagnosis of about 7 years. A depressive episode often precedes the first manic episode, making it difficult to distinguish BPD from unipolar major depressive disorder (MDD).
Aims
We use genome-wide association analyses (GWAS) to identify differential genetic factors and to develop predictors based on polygenic risk scores (PRS) that may aid early differential diagnosis.
Method
Based on individual genotypes from case–control cohorts of BPD and MDD shared through the Psychiatric Genomics Consortium, we compile case–case–control cohorts, applying a careful quality control procedure. In a resulting cohort of 51 149 individuals (15 532 BPD patients, 12 920 MDD patients and 22 697 controls), we perform a variety of GWAS and PRS analyses.
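As a rough illustration of the PRS idea referenced above: an individual's polygenic risk score is a weighted sum of risk-allele dosages, PRS_i = Σ_j β_j g_ij, where β_j is the GWAS effect size for variant j and g_ij ∈ {0, 1, 2} is the allele dosage. A minimal sketch with simulated inputs follows; it omits the clumping, thresholding and quality control that a real pipeline like this one requires.

```python
# Hypothetical sketch of a polygenic risk score (PRS) as a weighted sum of
# risk-allele dosages. Simulated inputs only; a real analysis would use
# GWAS summary statistics after clumping/thresholding and careful QC.
import numpy as np

rng = np.random.default_rng(0)
n_people, n_snps = 1_000, 5_000
dosages = rng.integers(0, 3, size=(n_people, n_snps))  # 0/1/2 copies of risk allele
betas = rng.normal(0.0, 0.01, size=n_snps)             # per-variant effect sizes

prs = dosages @ betas                   # PRS_i = sum_j beta_j * g_ij
prs = (prs - prs.mean()) / prs.std()    # standardise for comparability
```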
Results
Although our GWAS is not well powered to identify genome-wide significant loci, we find significant chip heritability and demonstrate the ability of the resulting PRS to distinguish BPD from MDD, including BPD cases with depressive onset (BPD-D). We replicate our PRS findings in an independent Danish cohort (iPSYCH 2015, N = 25 966). We observe strong genetic correlation between our case–case GWAS and that of case–control BPD.
Conclusions
We find that MDD and BPD, including BPD-D, are genetically distinct. Our findings support the view that controls, MDD patients and BPD patients lie primarily on a continuum of genetic risk. Future studies with larger and richer samples will likely yield a better understanding of these findings and enable the development of better genetic predictors distinguishing BPD and, importantly, BPD-D from MDD.
The stigma attached to mental health conditions hinders recovery and well-being. The Honest, Open, Proud (HOP) programme shows promise in reducing stigma, but there is uncertainty about the feasibility of a randomised trial to evaluate a peer-delivered, individual adaptation of HOP for psychosis (Let's Talk).
Methods
A multi-site feasibility randomised controlled trial (RCT) with a Prospective Randomized Open Blinded Evaluation (PROBE) design, comparing the peer-delivered intervention (Let's Talk) plus treatment as usual (TAU) with TAU alone. Follow-up was at 2.5 and 6 months. Randomisation was via a web-based system, with permuted blocks of random size. Up to 10 sessions of the intervention were offered over 10 weeks. The primary outcomes were feasibility data (recruitment, retention and intervention attendance), analysed by intention to treat. Safety outcomes were reported by as-treated status. The study was prospectively registered: https://doi.org/10.1186/ISRCTN17197043.
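A minimal sketch of permuted-block randomisation with randomly varying block sizes, as described above. The block sizes, seed and arm labels are illustrative assumptions, not details of the trial's web-based system.

```python
# Hypothetical sketch: permuted-block randomisation with random block sizes.
# Block sizes (2 or 4, i.e. 1 or 2 copies of each arm) are illustrative.
import random

def permuted_block_allocation(n, arms=("Let's Talk + TAU", "TAU"),
                              copies_per_block=(1, 2), seed=42):
    rng = random.Random(seed)
    allocations = []
    while len(allocations) < n:
        block = list(arms) * rng.choice(copies_per_block)  # pick a block size
        rng.shuffle(block)                                 # permute within the block
        allocations.extend(block)
    return allocations[:n]

schedule = permuted_block_allocation(70)   # 70 participants, as recruited here
print(schedule[:8])
```

Randomly varying the block size preserves approximate balance between arms while making the next allocation harder to predict than fixed-size blocks would.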
Results
Of the 149 patients referred to the study, 70 were recruited: 35 were randomly assigned to the intervention plus TAU and 35 to TAU alone. Recruitment reached 93% of the target sample size. Retention was high (81% at the 2.5-month primary endpoint), as was intervention attendance (83%). An adverse event occurred in 21% of the 33 patients in the Let's Talk + TAU group and 16% of the 37 patients in the TAU group (as treated). One serious adverse event (pre-randomisation) was partially related and expected.
Conclusions
This is the first trial to show that it is feasible and safe to conduct an RCT of HOP adapted for people with psychosis and delivered individually. An adequately powered trial is required to provide robust evidence.
Positive, negative and disorganised psychotic symptom dimensions are associated with clinical and developmental variables, but differing definitions complicate interpretation. Additionally, some variables have received little investigation.
Aims
To investigate associations of psychotic symptom dimensions with clinical and developmental variables, and familial aggregation of symptom dimensions, in multiple samples employing the same definitions.
Method
We investigated associations between lifetime symptom dimensions and clinical and developmental variables in two twin and two general psychosis samples. Dimension symptom scores and most other variables were from the Operational Criteria Checklist. We used logistic regression in generalised linear mixed models for combined sample analysis (n = 875 probands). We also investigated correlations of dimensions within monozygotic (MZ) twin pairs concordant for psychosis (n = 96 pairs).
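A minimal sketch of this kind of mixed-model logistic regression, assuming a hypothetical data frame with a binary clinical outcome, a dimension score, and a sample identifier used as the grouping factor; the variable names and the choice of statsmodels' variational-Bayes mixed GLM are illustrative assumptions, not the authors' actual pipeline.

```python
# Hypothetical sketch: logistic regression within a generalised linear mixed
# model, with a random intercept for the sample each proband came from.
# Column names and the estimation routine are illustrative assumptions.
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

df = pd.read_csv("probands.csv")
# columns: chronic_course (0/1), negative_score, sample_id (four samples)

model = BinomialBayesMixedGLM.from_formula(
    "chronic_course ~ negative_score",   # fixed effect: dimension symptom score
    {"sample": "0 + C(sample_id)"},      # random intercept per sample
    df,
)
result = model.fit_vb()                  # variational Bayes estimation
print(result.summary())
```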
Results
Higher symptom scores on all three dimensions were associated with poor premorbid social adjustment, never marrying or cohabiting, earlier age at onset and a chronic course, most strongly for the negative dimension. The positive dimension was also associated with Black and minority ethnicity and lifetime cannabis use; the negative dimension with male gender; and the disorganised dimension with gradual onset, lower premorbid IQ and substantial within-twin-pair correlation. In secondary analysis, disorganised symptoms in MZ twin probands were associated with lower premorbid IQ in their co-twins.
Conclusions
These results confirm associations that dimensions share in common and strengthen the evidence for distinct associations of co-occurring positive symptoms with ethnic minority status, negative symptoms with male gender and disorganised symptoms with substantial familial influences, which may overlap with influences on premorbid IQ.
Medusahead [Taeniatherum caput-medusae (L.) Nevski] is an invasive winter annual grass of western North American grasslands and rangelands that negatively impacts forage production, wildlife habitat, and ecosystem processes. Growth regulator herbicides, such as aminopyralid, applied in spring have reduced invasive annual grass seed viability in greenhouse and California annual grassland experiments. Beginning in fall 2017, we tested combinations of sequential fall (preemergence) and spring (postemergence) aminopyralid applications at low (103 g ae ha−1) and high (206 g ae ha−1) rates at two ecologically distinct sites in the Intermountain West. Preemergence and postemergence aminopyralid applications at low and high rates controlled T. caput-medusae by 76% to 100% by the second summer after study initiation. At the Utah site (which is warmer, drier, and more degraded than the Idaho site), the high rate resulted in better control. In the first summer, postemergence aminopyralid applications at low and high rates reduced seed viability by 47% to 91% compared with nontreated seeds, with the greatest reductions seen in Utah, which was experiencing drought. Across study sites, reduced T. caput-medusae germination in one year was linked to improved control the following year. The Idaho site also had desirable perennial grasses, which we used to investigate non-target effects. In general, high T. caput-medusae control was correlated with higher perennial grass cover, indicating that successful control can make desirable perennial grasses more vigorous in this system. The option of a spring aminopyralid application widens the management window for controlling invasive annual grasses by decreasing seed viability, thereby depleting short-lived seedbanks.
Serious incident management and organisational learning are international patient safety priorities. Little is known about the quality of suicide investigations and, in turn, the potential for organisational learning. Suicide risk assessment is acknowledged as a complex phenomenon, particularly in the context of adult community mental health services. Root cause analysis (RCA) is the dominant investigative approach, although the evidence base underpinning RCA is contested, with little attention paid to the patient in context and their cumulative risk over time.
Results
Recent literature proposes a Safety-II approach in response to the limitations of RCA. The importance of applying these approaches within a mental healthcare system that advocates a zero suicide framework, grounded in a restorative just culture, is highlighted.
Clinical implications
Although integrative reviews and syntheses have clear methodological limitations, this approach facilitates the management of a disparate body of work to advance a critical understanding of patient safety in adult community mental healthcare.
Women with bipolar disorder have a high recurrence rate in the perinatal period. However, the use of prophylactic medication can be a concern during pregnancy and breastfeeding, and there are few studies examining its impact on the risk of recurrence. The aims of this study were to describe the use of medication by women with bipolar disorder in the perinatal period and the impact of prophylactic medication on the rate of postnatal recurrence.
Methods
The Bipolar Disorder Research Network (BDRN) is the largest network of individuals with bipolar disorder and related mood disorders in the world. The BDRN pregnancy study is a prospective observational study conducted in the UK. We collected sociodemographic, clinical and medication data from pregnant women with a diagnosis of bipolar disorder who were euthymic entering the postpartum period. Clinical data were collected via interviews during pregnancy and the postpartum period, with access to clinical records where available.
Data were analysed for association using χ² tests and logistic regression.
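A minimal sketch of these two analyses, assuming a hypothetical data frame with binary medication-at-delivery and recurrence indicators plus covariates; the file and column names are illustrative, not the BDRN variables.

```python
# Hypothetical sketch: chi-squared test of association and a multivariable
# logistic regression, as described above. Column names are illustrative.
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency
import statsmodels.formula.api as smf

df = pd.read_csv("bdrn_pregnancy.csv")
# columns: medication (0/1), recurrence (0/1), age, parity, ...

# Chi-squared test: medication at delivery vs. postpartum recurrence.
table = pd.crosstab(df["medication"], df["recurrence"])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.3f}, p = {p:.2f}")

# Multivariable logistic regression: adjusted odds ratio (aOR) with 95% CI.
fit = smf.logit("recurrence ~ medication + age + parity", data=df).fit()
print(np.exp(fit.params["medication"]))            # aOR for medication
print(np.exp(fit.conf_int().loc["medication"]))    # 95% CI
```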
Results
Our total sample for this analysis comprised 103 women who met the criteria.
We found that 71 (70%) were taking medication at delivery: 43 (43%) antipsychotics, 9 (9%) antidepressants, 10 (10%) mood stabilisers (6 lithium, 4 anticonvulsants) and 9 (9%) multiple medication classes.
Of the total sample, 44 (43%) experienced a postpartum recurrence: 21 (20%) had an episode of postpartum psychosis, 15 (15%) non-psychotic depression and 8 (8%) hypomania. Of the 21 postpartum psychosis episodes, 11 were mania with psychosis, 8 mania without psychosis and 2 psychotic depression.
There was no significant association between taking medication at delivery and postpartum recurrence (χ²(1) = 0.116, p = 0.73).
In a multivariable analysis there continued to be no association after adjusting for age, ethnicity, parity, severity (previous admissions, age at impairment, bipolar subtype) and previous psychotic symptoms (aOR = 1.35, 95% CI 0.45 to 4.00, p = 0.59).
Conclusion
A high proportion of women with bipolar disorder were taking medication at delivery, in the majority antipsychotics. The postnatal recurrence rate was high in both medicated and unmedicated women.
Our findings align with recent electronic health record and observational studies, but differ from older clinical cohorts and samples with higher lithium prescribing. Limitations include the observational study design and confounding by indication. Further research in larger populations is necessary to inform clinical decision-making for women and their healthcare providers.
This book uses qualitative longitudinal data, from repeat interviews with people subject to compulsion and sanction in their everyday lives, to analyse the effectiveness and ethicality of welfare conditionality in promoting and sustaining behaviour change in the UK.
Using two composite case studies, the following chapter outlines the intersection of legal and forensic pathways to justice for persons with developmental disabilities in Ontario, Canada. These pathways include a number of junctures requiring decisions by different stakeholders across sectors, pertaining both to legal determinations of criminal fitness to stand trial and culpability, and to healthcare determinations of whether a contributory mental illness or disorder is present. Despite having similar profiles, people with developmental disabilities can have vastly different access, processes and outcomes depending upon a number of variables, including legal factors such as the severity of the offence and offence history, and extralegal factors such as support networks, the discretion of multiple decision makers, legal resources and jurisdiction. The pathways highlight the importance of ensuring equitable and therapeutic justice for such individuals.
Early surgical intervention in infants with complex CHD results in significant disruption to their respiratory, gastrointestinal and nervous systems, all of which are instrumental to the development of safe and efficient oral feeding skills. Standardised assessments and treatment protocols are not currently available for this unique population, requiring clinicians to rely on knowledge drawn from the neonatal literature. Clinicians need to be skilled at evaluating and analysing these systems to develop an appropriate treatment plan that improves oral feeding skill and safety while taking account of post-operative recovery in the infant with complex CHD. Supporting the family to re-establish their parental role during hospitalisation and upon discharge is critical both to reducing parental stress and to oral feeding success.
Within contemporary welfare states a principle of welfare conditionality links eligibility to publicly funded welfare benefits and services to individuals’ acceptance of state-specified compulsory responsibilities or particular patterns of required behaviour. When welfare conditionality is implemented in policy and practice it routinely involves the use of two core elements. First, compulsion, that is, the requirement that certain people mandatorily engage with specified packages of support, typically designed with the intention of moving them off social security benefits and into paid work, or tackling antisocial/ problematic behaviours. Second, the application of various types of sanction for non-compliance. Those people who do not engage as specified face the denial, or loss, of welfare benefits and services as a consequence of their failure to comply.
Internationally, across many and various types of welfare regimes, welfare conditionality has become a key part of the process of welfare reform (for example, Cox, 1998; Dwyer, 2010; Betzelt and Bothfeld, 2011; Baumberg Geiger, 2017). Influenced by New Right thinking (Mead, 1982, 1986), policymakers and governments from across the mainstream political spectrum have become convinced that the instrumental use of various combinations of sanction (sticks) and mandatory support (carrots) can effectively change citizens’ behaviour to reduce ‘welfare dependency’ and promote personal responsibility. In reality this instrumental behaviourism has been implemented largely in response to the conduct and dependency of poorer and marginalised citizens (Dwyer, 1998; Bray et al, 2014; Harrison and Saunders, 2016; Grymonprez et al, 2020) who rely on what Titmuss (1958) identified as ‘social welfare’ benefits and services to meet their needs. Enforcing behaviour change among wealthier citizens (the consistent beneficiaries of fiscal and occupational welfare) appears to be less of a concern for governments; for this group the preferred mode of behavioural intervention, ‘nudge’ (Thaler and Sunstein, 2008), is less directive and less potentially punitive.
Welfare states have always sought to promote particular values and specific types of preferred behaviour alongside the meeting of different types of need (Mann, 1992; Dean, 1998). The advance of welfare conditionality across a range of welfare states and policy sectors can be regarded as an attempt to resolve the long-standing tension between care and control, inherent within social welfare (Goroff, 1974; Brown, 2017), firmly in favour of the latter.
A regularly repeated justification of welfare conditionality within social security systems is the claim that it is effective in moving people off social welfare benefits and into paid employment. Within the UK, the advent of ‘in-work’ conditionality within Universal Credit (UC) has further seen mandatory work search activity and benefit sanctions identified as appropriate tools for enhancing ‘progression’ within the paid labour market (PLM) and simultaneously reducing low-paid workers’ reliance on rent and wage subsidies via paid welfare and taxation systems (see Chapters Two and Three; Jones et al, 2019; Wright and Dwyer, 2022). Drawing on analysis of the qualitative longitudinal data generated by WelCond, this chapter explores these claims. The first part of the chapter maps and highlights the differing work and welfare related trajectories and outcomes that ensued for the welfare service users (WSUs) who took part in repeat longitudinal interviews. However, the numerical mapping and typology offered facilitate only a partial and incomplete view of the ways that highly conditional social security systems impact on people’s pathways into, and out of, paid work. As Millar (2007: 537) notes, ‘quantitative data can map out trajectories … qualitative data can provide an understanding of what lies behind these’. Subsequent sections of the chapter, therefore, draw upon in-depth individual longitudinal case studies, presenting analysis that enables a more nuanced understanding of how, and why, welfare conditionality structures diverse work-related outcomes for different people, over time. On one level the individual case histories offered are illustrative of the wider patterns noted in the first part of the chapter. However, the detail and depth of these ‘condensed accounts’ enables a shift from ‘an illustrative case study towards the idea of an exploratory case history … [to] capture the essence of the interplay between agency and ecology, the particular and the general’ (Thomson, 2007: 57, 58).
As such they facilitate a deep and grounded understanding of the efficacy (or otherwise) of welfare conditionality in promoting and sustaining paid employment among social welfare benefit recipients.
Attaching behavioural requirements and conditions to receipt of social welfare benefits in the UK is not a new phenomenon. In 1911 the establishment of unemployment benefit saw claimants required to regularly attend the Labour Exchange office and sign a declaration that they were ‘available for employment’ to maintain their claim (see Harris, 2008, for fuller discussion of historical developments in unemployment benefit policy). In the early 20th century workfare-type requirements, including non-negotiable attendance at labour camps by young unemployed men in the 1920s and 1930s, were also periodically introduced, particularly during times of economic crisis (Fletcher, 2014a; Cooper, 2021). However, since the late 1980s successive UK governments have embraced, extended and intensified welfare conditionality to an unprecedented extent, applying more stringent behavioural requirements to greater numbers of people in receipt of welfare benefits and services, with penalties (that is, benefit sanctions) for non-compliance increasing significantly in recent years. Concentrating on key policies initiated since the mid-1990s, discussions in this chapter outline developments in relation to the implementation of welfare conditionality in three substantive areas of the UK welfare state, namely: social security; social housing; and the management of antisocial behaviour (ASB) among groups of citizens variously labelled as problematic or vulnerable in different contexts (Brown, 2017; Brown et al, 2017).
The social security system
As noted in Chapter One, prominent American New Right thinkers were at the forefront of propagating the view that unconditional entitlement-based rights to social benefits were instrumental in establishing and sustaining a welfare dependent ‘underclass’ (see, for example, Murray, 1984; Mead, 1986). Despite the loss of much UK manufacturing industry and significant periods of economic downturn, much government rhetoric since the late 1970s drew on this narrowly defined narrative of welfare dependency to emphasise the individual failings of benefit claimants rather than wider structural changes as a key cause of unemployment (Jordan, 2014). In line with their general antipathy towards the idea of social rights and an extensive welfare state, the Conservative administrations of the Thatcher and Major governments (1979–97) instigated significant reductions in both the numbers able to claim unemployment benefits and the generosity of payments while simultaneously increasing the behavioural requirements attached to continued receipt of unemployment benefits.
Chapters Four and Five considered the efficacy of welfare conditionality in moving recipients off social security benefits into paid work and also in relation to the reduction or cessation of harmful and antisocial behaviour. This chapter sets out and explores other significant key outcomes that ensue when ‘work first’ welfare conditionality is extensively promoted and rigorously implemented. The use of benefit sanctions is central to welfare conditionality within social security regimes and the first part of the chapter offers a commentary on the unprecedented rise (and subsequent fall) in the number of benefit sanctions within the UK (see Chapter Two, the social security system). Discussions in the second part of the chapter then set out the WelCond project’s key findings in relation to the implementation of sanction-backed social security regimes, that is, the universally detrimental impacts on the health, financial and emotional wellbeing of those subject to them. Building on this, the third part offers more detailed discussions of how, and why, compulsion alongside the threat and implementation of benefit sanctions prompt a number of unintended behavioural outcomes as people seek either to retain receipt of their benefits, or alternatively reject the imposition of welfare conditionality.
The great UK sanctioning drive
During the period 2010–16, characterised as the ‘great sanctioning drive’ (Webster, 2016), the routine use of benefit sanctions in the UK significantly increased, reaching an unprecedented peak in 2013 before falling away in subsequent years (see Figure 6.1). Between 2009/10 and 2013/14 more than one-fifth of all individuals who claimed Jobseeker’s Allowance (JSA) were sanctioned at some point (that is, 1,833,035 of the 8,232,560 individuals who claimed JSA within the specified five-year period, approximately 22 per cent). It is estimated that in the year to 30 September 2014 approximately 895,000 JSA and Employment and Support Allowance (ESA) sanctions were issued before reconsiderations and appeals (Webster, 2015). The National Audit Office estimated that in 2015 there were around 800,000 referrals for a sanction decision among claimants of Universal Credit (UC), JSA, ESA or Income Support (NAO, 2016).
Powerful actors and institutions have long sought to make people behave in specific ways. Within feudal societies, nascent forms of the state were designed to deliver a monarch’s bidding and the threat or use of violence was a key tool in getting subjects to obey the sovereign power. Although retaining a monopoly on the legitimate use of force remains a defining element of the modern state, today, democratically elected governments tend to look beyond brute force and employ a range of techniques and tools to persuade citizens to act in particular ways (Kelly, 2016). Questions about how to make individual citizens behave more responsibly, particularly in relation to public health and environmental concerns, have become a more prominent concern within public policy in recent decades (Collins et al, 2003: 6 et al, 2010; Spotswood, 2016). As Kelly (2016: 11) describes: ‘Behaviour change is usually about making people different from how they are now.’ Policymakers use a range of tools to variously incentivise, persuade, cajole and compel people to behave in prescribed ways regarded as beneficial for the individual concerned and wider society.
This chapter explores how highly conditional welfare interventions have come to be regarded as important instruments of behaviour change by many governments. The first part of the chapter offers a brief overview of the broader economic and psychological theories of behaviour change that have held sway within social science and continue to heavily influence the thinking of contemporary policymakers. The second part considers how agency and behaviour have been conceptualised within the welfare conditionality literature, and the relevance of the different policy tools (that is, sanctions, support, sermons and nudges) that policymakers have at their disposal when attempting to change the behaviour of those reliant on social welfare benefits and services. The third part reviews existing evidence on the effectiveness of welfare conditionality in either moving those reliant on social welfare benefits into paid work or promoting the cessation of problematic behaviour among sections of the population.
Theorising behaviour change
The literature theorising human behaviour and the various models for understanding and generating behaviour change is vast.