Volume I offers a broad perspective on urban culture in the ancient European world. It begins with chronological overviews which paint in broad brushstrokes a picture that serves as a frame for the thematic chapters in the rest of the volume. Positioning ancient Europe within its wider context, it touches on Asia and Africa as regions that informed and were later influenced by urban development in Europe, with particular emphasis on the Mediterranean basin. Topics range from formal characteristics (including public space), water provision, waste disposal, urban maintenance, spaces for the dead, and border spaces; to ways of thinking about, visualising, and remembering cities in antiquity; to conflict within and between cities, economics, mobility and globalisation, intersectional urban experiences, slavery, political participation, and religion.
Functional impairment in daily activities, such as work and socializing, is part of the diagnostic criteria for major depressive disorder and most anxiety disorders. Despite evidence that symptom severity and functional impairment are partially distinct, functional impairment is often overlooked. To assess whether functional impairment captures diagnostically relevant genetic liability beyond that of symptoms, we aimed to estimate the heritability of, and genetic correlations between, key measures of current depression symptoms, anxiety symptoms, and functional impairment.
Methods
In 17,130 individuals with lifetime depression or anxiety from the Genetic Links to Anxiety and Depression (GLAD) Study, we analyzed total scores from the Patient Health Questionnaire-9 (depression symptoms), Generalized Anxiety Disorder-7 (anxiety symptoms), and Work and Social Adjustment Scale (functional impairment). Genome-wide association analyses were performed with REGENIE. Heritability was estimated using GCTA-GREML and genetic correlations with bivariate-GREML.
Results
The phenotypic correlations were moderate across the three measures (Pearson’s r = 0.50–0.69). All three scales were found to be under low but significant genetic influence (single-nucleotide polymorphism-based heritability [h2SNP] = 0.11–0.19) with high genetic correlations between them (rg = 0.79–0.87).
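The variance-component arithmetic behind these estimates can be illustrated with a short sketch (toy numbers, not the GLAD estimates): SNP-based heritability is the genetic share of phenotypic variance, and the genetic correlation scales the genetic covariance by the two traits' genetic standard deviations.

```python
import math

def snp_heritability(v_g, v_e):
    """SNP-based heritability: share of phenotypic variance that is genetic."""
    return v_g / (v_g + v_e)

def genetic_correlation(cov_g, v_g1, v_g2):
    """Genetic correlation between two traits from variance components."""
    return cov_g / math.sqrt(v_g1 * v_g2)

# Illustrative variance components (invented, not the GLAD estimates)
v_g_dep, v_e_dep = 0.15, 0.85   # depression symptoms
v_g_anx, v_e_anx = 0.12, 0.88   # anxiety symptoms
cov_g = 0.11                     # genetic covariance between the traits

h2_dep = snp_heritability(v_g_dep, v_e_dep)            # 0.15
h2_anx = snp_heritability(v_g_anx, v_e_anx)            # 0.12
rg = genetic_correlation(cov_g, v_g_dep, v_g_anx)      # ~0.82
print(round(h2_dep, 2), round(h2_anx, 2), round(rg, 2))
```

In practice these components are estimated jointly from genotype data (e.g. with GCTA-GREML); the sketch only shows how the reported ratios are formed once the components are in hand.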
Conclusions
Among individuals with lifetime depression or anxiety from the GLAD Study, the genetic variants that underlie symptom severity largely overlap with those influencing functional impairment. This suggests that self-reported functional impairment, while clinically relevant for diagnosis and treatment outcomes, does not reflect substantial additional genetic liability beyond that captured by symptom-based measures of depression or anxiety.
Lesbian, gay, and bisexual (LGB) individuals are more than twice as likely to experience anxiety and depression compared with heterosexuals. Minority stress theory posits that stigma and discrimination contribute to chronic stress, potentially affecting clinical treatment. We compared psychological therapy outcomes between LGB and heterosexual patients by gender.
Methods
Retrospective cohort data were obtained from seven NHS talking therapy services in London, from April 2013 to December 2023. Of 100,389 patients, 94,239 reported sexual orientation, with 7,422 identifying as LGB. The primary outcome was reliable recovery from anxiety and depression. Secondary outcomes were reliable improvement, depression and anxiety severity, therapy attrition, and engagement. Analyses were stratified by gender and employed multilevel regression models, adjusting for sociodemographic and clinical covariates.
Results
After adjustment, gay men had higher odds of reliable recovery (OR: 1.23, 95% CI: 1.13–1.34) and reliable improvement (OR: 1.16, 95% CI: 1.06–1.28) than heterosexual men, with lower attrition (OR: 0.88, 95% CI: 0.80–0.97) and greater reductions in depression (MD: 0.51, 95% CI: 0.28–0.74) and anxiety (MD: 0.45, 95% CI: 0.25–0.65). Bisexual men (OR: 0.67, 95% CI: 0.54–0.83) and bisexual women (OR: 0.84, 95% CI: 0.77–0.93) had lower attrition than heterosexuals. Lesbian and bisexual women, and bisexual men, attended slightly more sessions (MD: 0.02–0.03, 95% CI: 0.01–0.04) than heterosexual patients. No other differences were observed.
Conclusions
Despite significant mental health burdens and stressors, LGB individuals had similar, if not marginally better, outcomes and engagement with psychological therapy compared with heterosexual patients.
The treatment recommendation based on a network meta-analysis (NMA) is usually the single treatment with the highest expected value (EV) on an evaluative function. We explore approaches that recommend multiple treatments and that penalise uncertainty, making them suitable for risk-averse decision-makers. We introduce loss-adjusted EV (LaEV) and compare it to GRADE and three probability-based rankings. We define properties of a valid ranking under uncertainty and other desirable properties of ranking systems. A two-stage process is proposed: the first stage identifies treatments superior to the reference treatment; the second identifies those that are also within a minimal clinically important difference (MCID) of the best treatment. Decision rules and ranking systems are compared on stylised examples and 10 NMAs used in NICE (National Institute for Health and Care Excellence) guidelines. Only LaEV reliably delivers valid rankings under uncertainty and has all the desirable properties. In 10 NMAs comparing between 5 and 41 treatments, an EV decision-maker would recommend 4–14 treatments, and LaEV 0–3 (median 2) fewer. GRADE rules give rise to anomalies, and, like the probability-based rankings, the number of treatments recommended depends on arbitrary probability cutoffs. Among treatments that are superior to the reference, GRADE privileges the more uncertain ones, and in 3/10 cases GRADE failed to recommend the treatment with the highest EV and LaEV. A two-stage approach based on MCID ensures that EV- and LaEV-based rules recommend a clinically appropriate number of treatments. For a risk-averse decision-maker, LaEV is conservative, simple to implement, and has an independent theoretical foundation.
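As a generic illustration of penalising uncertainty — not the paper's LaEV formula, whose exact definition is given in the full text — the sketch below ranks two hypothetical treatments by posterior mean effect minus expected shortfall below the reference. A very uncertain treatment with the higher mean loses its top rank once the downside is penalised, which is the qualitative behaviour a risk-averse rule aims for.

```python
import numpy as np

rng = np.random.default_rng(1)

# Posterior samples of effect vs. the reference treatment
# (rows: draws, cols: treatments). Hypothetical numbers.
samples = np.column_stack([
    rng.normal(0.30, 0.05, 20_000),  # treatment A: modest effect, precise
    rng.normal(0.35, 0.60, 20_000),  # treatment B: better mean, very uncertain
])

ev = samples.mean(axis=0)
# Penalty: expected shortfall below zero (i.e. no benefit over the reference).
shortfall = np.clip(-samples, 0.0, None).mean(axis=0)
risk_adjusted = ev - shortfall

print("EV order:           ", list(np.argsort(-ev)))             # B before A
print("risk-adjusted order:", list(np.argsort(-risk_adjusted)))  # A before B
```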
The recent expansion of cross-cultural research in the social sciences has led to increased discourse on methodological issues involved when studying culturally diverse populations. However, discussions have largely overlooked the challenges of construct validity – ensuring instruments are measuring what they are intended to – in diverse cultural contexts, particularly in developmental research. We contend that cross-cultural developmental research poses distinct problems for ensuring high construct validity owing to the nuances of working with children, and that the standard approach of transporting protocols designed and validated in one population to another risks low construct validity. Drawing upon our own and others’ work, we highlight several challenges to construct validity in the field of cross-cultural developmental research, including (1) lack of cultural and contextual knowledge, (2) dissociating developmental and cultural theory and methods, (3) lack of causal frameworks, (4) superficial and short-term partnerships and collaborations, and (5) culturally inappropriate tools and tests. We provide guidelines for addressing these challenges, including (1) using ethnographic and observational approaches, (2) developing evidence-based causal frameworks, (3) conducting community-engaged and collaborative research, and (4) the application of culture-specific refinements and training. We discuss the need to balance methodological consistency with culture-specific refinements to improve construct validity in cross-cultural developmental research.
From early on, infants show a preference for infant-directed speech (IDS) over adult-directed speech (ADS), and exposure to IDS has been correlated with language outcome measures such as vocabulary. The present multi-laboratory study explores this issue by investigating whether there is a link between early preference for IDS and later vocabulary size. Infants’ preference for IDS was tested as part of the ManyBabies 1 project, and follow-up CDI data were collected from a subsample of this dataset at 18 and 24 months. A total of 341 (18 months) and 327 (24 months) infants were tested across 21 laboratories. We found no evidence for a relation between IDS preference and later vocabulary, either in preregistered analyses of North American and UK English samples or in exploratory analyses with a larger sample. We discuss implications of this finding in light of recent work suggesting that IDS preference measured in the laboratory has low test-retest reliability.
Mixing describes the process by which solutes evolve from an initial heterogeneous state to uniformity under the stirring action of a fluid flow. Fluid stretching forms thin scalar lamellae that coalesce due to molecular diffusion. Owing to the linearity of the advection–diffusion equation, coalescence can be envisioned as an aggregation process. Here, we demonstrate that in smooth two-dimensional chaotic flows, mixing obeys a correlated aggregation process, where the spatial distribution of the number of lamellae in aggregates is highly correlated with their elongation, and is set by the fractal properties of the advected material lines. We show that the presence of correlations makes mixing less efficient than a completely random aggregation process because lamellae with similar elongations and scalar levels tend to remain isolated from each other. We show that correlated aggregation is uniquely determined by a single exponent that quantifies the effective number of random aggregation events. These findings expand aggregation theories to a larger class of systems, which have relevance to various fundamental and applied mixing problems.
Attention-deficit/hyperactivity disorder (ADHD) is a highly prevalent psychiatric condition that frequently originates in early development and is associated with a variety of functional impairments. Despite a large functional neuroimaging literature on ADHD, our understanding of the neural basis of this disorder remains limited, and existing primary studies on the topic include somewhat divergent results.
Objectives
The present meta-analysis aims to advance our understanding of the neural basis of ADHD by identifying the most statistically robust patterns of abnormal neural activation throughout the whole brain in individuals diagnosed with ADHD compared to age-matched healthy controls.
Methods
We conducted a meta-analysis of task-based functional magnetic resonance imaging (fMRI) activation studies of ADHD. Following PRISMA guidelines, this included a comprehensive PubMed search and predetermined inclusion criteria, as well as two independent coding teams who evaluated studies and included all task-based, whole-brain, fMRI activation studies that compared participants diagnosed with ADHD to age-matched healthy controls. We then performed multilevel kernel density analysis (MKDA), a well-established, whole-brain, voxelwise approach that quantitatively combines existing primary fMRI studies, with ensemble thresholding (p<0.05–0.0001) and multiple comparisons correction.
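The kernel step of MKDA can be illustrated in one dimension: each study contributes an indicator map marking voxels within a radius of its reported peaks, and the statistic at each voxel is the (here unweighted) proportion of studies activating nearby. This toy sketch, with invented peak coordinates, omits the weighting, thresholding, and correction stages of the real method.

```python
import numpy as np

def study_indicator(peaks, n_vox, radius):
    """Indicator map: 1 where a voxel lies within `radius` of any reported peak."""
    vox = np.arange(n_vox)
    hits = np.abs(vox[:, None] - np.asarray(peaks)[None, :]) <= radius
    return hits.any(axis=1).astype(float)

# Three toy "studies" reporting peak coordinates on a 1-D brain of 100 voxels
studies = [[20, 55], [22], [21, 80]]
maps = np.stack([study_indicator(p, 100, radius=3) for p in studies])

# MKDA-style statistic: proportion of studies activating near each voxel
density = maps.mean(axis=0)
print(density[21])  # voxels near 20-22 are hit by all three studies -> 1.0
```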
Results
Participants diagnosed with ADHD (N=1,550), relative to age-matched healthy controls (N=1,340), exhibited statistically significant (p<0.05–0.0001; FWE-corrected) patterns of abnormal activation in multiple brain regions of the cerebral cortex and basal ganglia across a variety of cognitive control tasks.
Conclusions
This study advances our understanding of the neural basis of ADHD and may aid in the development of new brain-based clinical interventions as well as diagnostic tools and treatment matching protocols for patients with ADHD. Future studies should also investigate the similarities and differences in neural signatures between ADHD and other highly comorbid psychiatric disorders.
Different fertilization strategies can be adopted to optimize the productive components of integrated crop–livestock systems. The current research evaluated how the application of P and K to soybean (Glycine max (L.) Merr.) or Urochloa brizantha (Hochst. ex A. Rich.) R. D. Webster cv. BRS Piatã, with or without nitrogen in the pasture phase, affects the accumulation and chemical composition of forage and animal productivity. The treatments were distributed in randomized blocks with three replications. Four fertilization strategies were tested: (1) conventional fertilization with P and K in the crop phase (CF–N); (2) conventional fertilization with nitrogen in the pasture phase (CF + N); (3) system fertilization with P and K in the pasture phase (SF–N); (4) system fertilization with nitrogen in the pasture phase (SF + N). System fertilization increased forage accumulation from 15 710 to 20 920 kg DM/ha per year compared with conventional fertilization without nitrogen. Stocking rate (3.1 vs. 2.8 AU/ha; SEM = 0.12) and gain per area (458 vs. 413 kg BW/ha; SEM = 27.9) were higher in SF–N than in CF–N, although average daily gain was lower (0.754 vs. 0.792 kg LW/day; SEM = 0.071). N application in the pasture phase, under both conventional and system fertilization, resulted in higher crude protein, stocking rate, and gain per area. Applying nitrogen and relocating P and K from the crop to the pasture phase increased animal productivity and improved forage chemical composition in the integrated crop–livestock system.
To test the hypothesis that exposure to peer self-harm induces adolescents’ urges to self-harm and that this is influenced by individual suggestibility.
Methods:
We recruited 97 UK-based adults aged 18–25 years with a recent history of self-harm, measuring baseline suggestibility (Resistance to Peer Influence; RPI) and perceived ability to control urges to self-harm (using an adapted item from the Self-Efficacy to Resist Suicidal Action scale; SEASA) before and after two self-harm vignettes featuring named peers from the participant’s social network (to simulate exposure to peer non-suicidal self-harm) and after a wash-out exposure. We used paired t-tests to compare mean SEASA scores pre- and post-exposure, and linear regression to test for an association between RPI and change in SEASA scores pre- and post-exposure.
Results:
Perceived ability to control urges to self-harm was significantly reduced following exposure to peer self-harm (t(96) = 4.02, p < 0.001, mean difference = 0.61; 95% CI = 0.31, 0.91), but was not significantly different from baseline after exposure to a wash-out. We found no association between suggestibility and change in urges to self-harm after exposure to peer self-harm.
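The core comparison is a paired t-test on pre- vs post-exposure scores. The sketch below reproduces the shape of that analysis on simulated data (not the study data; the score scale and effect size are invented for illustration).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated SEASA-style scores for 97 participants (invented numbers):
# perceived control drops by ~0.6 points after the self-harm vignettes.
pre = rng.normal(7.0, 1.5, 97)
post = pre - rng.normal(0.6, 1.5, 97)

# Paired t-test on pre- vs post-exposure scores, as in the analysis above
t, p = stats.ttest_rel(pre, post)
diff = (pre - post).mean()
lo, hi = stats.t.interval(0.95, df=96, loc=diff, scale=stats.sem(pre - post))
print(f"t(96) = {t:.2f}, p = {p:.4f}, mean diff = {diff:.2f} [{lo:.2f}, {hi:.2f}]")
```

The follow-up question — whether suggestibility predicts the pre/post change — would then be a linear regression of the score change on the RPI measure.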
Conclusion:
Our findings support social influences on self-harm in a sample of young adults, regardless of their individual degree of suggestibility.
Area-based conservation is a widely used approach for maintaining biodiversity, and there are ongoing discussions over what is an appropriate global conservation area coverage target. To inform such debates, it is necessary to know the extent and ecological representativeness of the current conservation area network, but this is hampered by gaps in existing global datasets. In particular, although data on privately and community-governed protected areas and other effective area-based conservation measures are often available at the national level, it can take many years to incorporate these into official datasets. This suggests a complementary approach is needed based on selecting a sample of countries and using their national-scale datasets to produce more accurate metrics. However, every country added to the sample increases the costs of data collection, collation and analysis. To address this, here we present a data collection framework underpinned by a spatial prioritization algorithm, which identifies a minimum set of countries that are also representative of 10 factors that influence conservation area establishment and biodiversity patterns. We then illustrate this approach by identifying a representative set of sampling units that cover 10% of the terrestrial realm, which included areas in only 25 countries. In contrast, selecting 10% of the terrestrial realm at random included areas across a mean of 162 countries. These sampling units could be the focus of future data collation on different types of conservation area. Analysing these data could produce more rapid and accurate estimates of global conservation area coverage and ecological representativeness, complementing existing international reporting systems.
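The paper's spatial prioritization algorithm is not specified in this summary; as a minimal illustration of the underlying idea — meeting representation targets across multiple factors with as few sampling units (and hence countries) as possible — a greedy set-cover sketch over hypothetical units and factors might look like this.

```python
# Greedy sketch of representativeness-driven selection: pick sampling units
# until each factor's coverage target is met. Toy data; not the paper's
# algorithm, units, or factors.

units = [
    # (unit id, country, {factor: amount contributed})
    ("u1", "A", {"biome_forest": 3, "strict_pa": 2}),
    ("u2", "A", {"biome_desert": 4}),
    ("u3", "B", {"biome_forest": 1, "biome_desert": 1, "strict_pa": 1}),
    ("u4", "C", {"strict_pa": 3}),
]
targets = {"biome_forest": 3, "biome_desert": 2, "strict_pa": 2}

covered = {f: 0 for f in targets}
chosen = []
while any(covered[f] < targets[f] for f in targets):
    # Marginal gain: how much unmet target a unit would cover
    def gain(u):
        return sum(min(a, targets[f] - covered[f])
                   for f, a in u[2].items() if covered[f] < targets[f])
    best = max((u for u in units if u[0] not in chosen), key=gain)
    chosen.append(best[0])
    for f, a in best[2].items():
        covered[f] = min(targets[f], covered[f] + a)

print(chosen, "countries:", sorted({u[1] for u in units if u[0] in chosen}))
```

Here two units from a single country satisfy all three targets, mirroring the paper's point that a well-chosen sample can be far more compact than random selection.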
In this work, we present a methodology and a corresponding code-base for constructing mock integral field spectrograph (IFS) observations of simulated galaxies in a consistent and reproducible way. Such methods are necessary to improve the collaboration and comparison of observation and theory results, and accelerate our understanding of how the kinematics of galaxies evolve over time. This code, SimSpin, is an open-source package written in R, but also with an API interface such that the code can be interacted with in any coding language. Documentation and individual examples can be found at the open-source website connected to the online repository. SimSpin is already being utilised by international IFS collaborations, including SAMI and MAGPI, for generating comparable data sets from a diverse suite of cosmological hydrodynamical simulations.
Delirium is characterised by an acute, fluctuating change in cognition, attention and awareness (Wilson et al. Nature Reviews 2020; 6). This presentation can make the diagnosis of delirium extremely challenging for clinicians (Gofton, Canadian Journal of Neurological Sciences 2011; 38: 673–680). It is commonly reported in hospitalised patients, particularly in those over the age of sixty-five (NICE. Delirium: prevention, diagnosis and management. 2010).
Objectives
Our aim is to identify which investigations and cognitive assessments are completed prior to a referral to the liaison psychiatry services in patients with symptoms of delirium.
Methods
Referrals (N = 6012) to the liaison psychiatry team at Croydon University Hospital made between April and September 2022 were screened. Search parameters used to identify referrals relating to a potential diagnosis of delirium were selected by the authors. The terms used were: confusion; delirium; agitation; aggression; cognitive decline or impairment; disorientation; challenging behaviour. Data were collected on the completion rates of investigations for delirium as advised by the NICE clinical knowledge summaries. Further data were gathered on neuroimaging (CT or MRI), cognitive assessment tools (MOCA/MMSE) and delirium screening tools (4AT/AMTS).
Results
The study sample identified 114 referrals (61 males and 53 females), with 82% over 65 years at the time of referral. In 96% of referrals, U&E and CRP were performed. Sputum culture (1%), urine toxin screen (4%) and free T3/4 (8%) were the tests utilised the least. Neuroimaging was completed in 41% of referrals (see Graph 1 for a full breakdown of results).
A formal cognitive assessment or delirium screening tool was completed in 32% of referrals. The AMTS and 4AT tools were documented in 65% and 24% of these, respectively. A total of 19 referrals explicitly stated that the patient was suspected to have dementia. A delirium screening tool was documented in 47% of these cases; however, a formal cognitive assessment was documented in only 5% of these patients.
Following psychiatric assessment, 47% of referrals were confirmed as delirium.
Conclusions
Our data highlight the low level of completion of the NICE-recommended delirium screen prior to referral to liaison psychiatry. The effective implementation of a delirium screen and cognitive assessment is paramount to reducing the number of inappropriate psychiatric referrals in hospital and to identifying reversible organic causes of delirium. This in turn will ensure timely treatment of reversible causes and reduce the length of hospital admission.
Transitions into an assisted living home (ALH) are difficult and may impact the well-being of older adults. A thematic analysis guided by grounded theory was employed to better understand how a transition into an ALH influenced older adults’ overall well-being. Individual, face-to-face interviews were conducted with a convenience sample of 14 participants at an ALH in the rural, southeastern U.S. Two central findings that influenced well-being during the transition process were revealed: loss of independence (sub-themes include loss of physical and mental health and loss of driving) and downsizing in space and possessions. The themes support and broaden the Hierarchical Leisure Constraints Theory, a Modified Constraints to Wellbeing model is proposed, and implications for older adult health care practitioners in ALHs are recommended. Further research is needed on the Modified Constraints to Wellbeing model and how to better describe these constraints to older adults’ well-being when relocating into ALHs.
When assessing the evolution of the early Roman Republic, scholars typically designate a break between the fifth/fourth centuries and the end of the fourth century BCE/beginning of the third, based on political, legal, and military milestones. Archaeologists detect a similar break, as members of the new nobilitas turned to architecture as a vehicle for self-representation. Where most scholarship characterizes buildings and the broader cityscape as a reflection of political change, this chapter deploys theories of object agency and object-scapes to argue for their agency in effecting such change. Questioning whether Romans were conscious, at the time, of a new era dawning, I suggest that circumstantial evidence supports a hypothesis that, at least in the later Republic, they were.
Lumateperone (LUMA) is an FDA-approved antipsychotic to treat schizophrenia and depressive episodes associated with bipolar I or bipolar II disorder. An open-label study (Study 303) evaluated the safety and tolerability of LUMA in outpatients with stable schizophrenia who switched from previous antipsychotic (AP) treatment. This post hoc analysis of Study 303 investigated the safety and tolerability of LUMA stratified by previous AP in patients who switched to LUMA treatment for 6 weeks.
Methods
Adult outpatients (≥18 years) with stable schizophrenia were switched from their previous AP to LUMA 42 mg once daily for 6 weeks, followed by a switch to another approved AP for 2 weeks of follow-up. Post hoc analyses were stratified by the most common previous APs: risperidone or paliperidone (RIS/PAL); quetiapine (QET); aripiprazole or brexpiprazole (ARI/BRE); olanzapine (OLA). Safety analyses included adverse events (AE), vital signs, and laboratory tests. Efficacy was assessed using the Positive and Negative Syndrome Scale (PANSS) and the Clinical Global Impressions-Severity (CGI-S) scale.
Results
The safety population comprised 301 patients, of which 235 (78.1%) were previously treated with RIS/PAL (n=95), QET (n=60), ARI/BRE (n=43), or OLA (n=37). Rates of treatment-emergent AEs (TEAEs) while on LUMA were similar between previous AP groups (44.2%-55.8%). TEAEs with incidences of ≥5% in any AP group were dry mouth, somnolence, sedation, headache, diarrhea, cough, and insomnia. Most TEAEs were mild or moderate in severity for all groups. Rates of serious TEAEs were low and similar between groups (0%–7.0%).
Statistically significant (P<.05) decreases from baseline in total cholesterol and low-density lipoprotein cholesterol were observed in the OLA group that switched to LUMA, with significant decreases sustained thereafter on LUMA. Statistically significant decreases in prolactin levels were observed in both the RIS/PAL (P<.0001) and OLA (P<.05) groups. Patients switched from RIS/PAL to LUMA showed significant (P<.05) decreases in body mass index, waist circumference, and weight. At follow-up, 2 weeks after patients switched back from LUMA to another AP, none of the decreases in laboratory parameters or body morphology observed while on LUMA maintained significance.
Those switching from QET had significant improvements from baseline at Day 42 in PANSS Total score (mean change from baseline −3.47; 95% confidence interval [CI] −5.27, −1.68; P<.001) and CGI-S Total score (mean change from baseline −0.24; 95% CI, −0.38, −0.10; P<.01).
Conclusion
In outpatients with stable schizophrenia, LUMA 42 mg treatment was well tolerated in patients switching from a variety of previous APs. Patients switching from RIS/PAL or OLA to LUMA had significant improvements in cardiometabolic and prolactin parameters. These data further support the favorable safety, tolerability, and efficacy of LUMA in patients with schizophrenia.
There is substantial variation in patient symptoms following psychological therapy for depression and anxiety. However, reliance on endpoint outcomes ignores additional interindividual variation during therapy. Knowing a patient's likely symptom trajectories could guide clinical decisions. We aimed to identify latent classes of patients with similar symptom trajectories over the course of psychological therapy and explore associations between baseline variables and trajectory class.
Methods
Patients received high-intensity psychological treatment for common mental health problems at National Health Service Improving Access to Psychological Therapies services in South London (N = 16 258). To identify trajectories, we performed growth mixture modelling of depression and anxiety symptoms over 11 sessions. We then ran multinomial regressions to identify baseline variables associated with trajectory class membership.
Results
Trajectories of depression and anxiety symptoms were highly similar and best modelled by four classes. Three classes started with moderate-severe symptoms and showed (1) no change, (2) gradual improvement, and (3) fast improvement. A final class (4) showed initially mild symptoms and minimal improvement. Within the moderate-severe baseline symptom classes, patients in the two improving classes, as opposed to the no-change class, tended not to be prescribed psychotropic medication, not to report a disability, and to be in employment. Patients showing fast improvement additionally reported lower baseline functional impairment on average.
Conclusions
Multiple trajectory classes of depression and anxiety symptoms were associated with baseline characteristics. Identifying the most likely trajectory for a patient at the start of treatment could inform decisions about the suitability and continuation of therapy, ultimately improving patient outcomes.
While studies from the start of the COVID-19 pandemic have described initial negative effects on mental health and exacerbating mental health inequalities, longer-term studies are only now emerging.
Method
In total, 34 465 individuals in the UK completed online questionnaires and were re-contacted over the first 12 months of the pandemic. We used growth mixture modelling to identify trajectories of depression, anxiety and anhedonia symptoms using the 12-month data. We identified sociodemographic predictors of trajectory class membership using multinomial regression models.
Results
Most participants had consistently low symptoms of depression or anxiety over the year of assessments (60% and 69%, respectively), and a minority had consistently high symptoms (10% and 15%). We also identified participants who appeared to show improvements in symptoms as the pandemic progressed, and others who showed the opposite pattern: marked symptom worsening until the second national lockdown. Unexpectedly, most participants showed stable low positive affect, indicating anhedonia, throughout the 12-month period. From regression analyses, younger age, reporting a previous mental health diagnosis, having a non-binary or self-defined gender, and being unemployed or a student were significantly associated with membership of the stable high-symptom groups for depression and anxiety.
Conclusions
While most participants showed little change in their depression and anxiety symptoms across the first year of the pandemic, we highlight the divergent responses of subgroups of participants, who fared both better and worse around national lockdowns. We confirm that previously identified predictors of negative outcomes in the first months of the pandemic also predict negative outcomes over a 12-month period.