Water is often referred to as our most precious resource, and for good reason – drinking water and wastewater services sustain core functions of critical infrastructure, communities, and human life itself. Our water systems are threatened by aging infrastructure, floods, drought, storms, earthquakes, sea level rise, population growth, cyber-security breaches, and pollution, often in combination. Marginalized communities inevitably feel the worst impacts, and our response continues to be hampered by fragmented and antiquated governance and management practices. This paper focuses on the resilience of the water sector (drinking water, wastewater, and stormwater [DWS]) to three major hazards (sea-level rise, earthquakes, and cyberattacks). Its purpose is to provide information useful for creating and maintaining resilient water system services. The term resilience describes the ability to adapt to changing conditions and to withstand and recover from disruptions. The resilience of DWS systems is of utmost importance to modern societies, which are highly dependent on continued access to these water sector services. This review covers the terminology of water sector resilience and assesses a broad landscape of threats mapped onto the proposed framework. Two areas of resilience are discussed in more detail: physical resilience, currently a major factor in disruptions and failures of DWS systems, and digital resilience, a rapidly growing concern for modern infrastructure systems. The resilience of DWS systems should be considered holistically, inclusive of social, digital, and physical systems. The framework integrates various perspectives on water system threats by showcasing interactions between the parts of the DWS systems and their environment. While the challenges of change, shocks, and stresses are inevitable, embracing a social–ecological–technical system-of-systems and whole-life approach will allow us to better understand and operationalize resilience.
With increasing recognition of the prevalence and impact of perinatal mental health (PMH) disorders comes a responsibility to ensure that tomorrow's doctors can support families during the perinatal period. Online surveys seeking information about the inclusion of PMH education in undergraduate curricula were sent to psychiatry curriculum leads and student psychiatry societies from each university medical school in the UK between April and September 2021.
Results
Responses were received from 32/35 (91.4%) medical schools. Two-thirds reported specific inclusion of PMH content in the core curriculum, typically integrated into general adult psychiatry or obstetric teaching. Students at the remaining schools were all likely to be examined on the topic or see perinatal cases during at least one clinical attachment.
Clinical implications
PMH education offers an opportunity for collaboration between psychiatry and other disciplines. Future work looking at educational case examples with objective outcomes would be valuable.
Describe nutrition and physical activity practices, nutrition self-efficacy and barriers and food programme knowledge within Family Child Care Homes (FCCH) and differences by staffing.
Design:
Baseline, cross-sectional analyses of the Happy Healthy Homes randomised trial (NCT03560050).
Setting:
FCCH in Oklahoma, USA.
Participants:
FCCH providers (n 49, 100 % women, 30·6 % Non-Hispanic Black, 2·0 % Hispanic, 4·1 % American Indian/Alaska Native, 51·0 % Non-Hispanic white, 44·2 ± 14·2 years of age; 53·1 % had additional staff) self-reported nutrition and physical activity practices and policies, nutrition self-efficacy and barriers and food programme knowledge. Differences between providers with and without additional staff were adjusted for multiple comparisons (P < 0·01).
Results:
The prevalence of meeting all nutrition and physical activity best practices ranged from 0·0 to 43·8 % and from 4·1 to 16·7 %, respectively. Average nutrition and physical activity scores were 3·2 ± 0·3 and 3·0 ± 0·5 (max 4·0), respectively. Sum nutrition and physical activity scores were 137·5 ± 12·6 (max 172·0) and 48·4 ± 7·5 (max 64·0), respectively. Providers reported high nutrition self-efficacy and few barriers. The majority of providers (73·9–84·7 %) felt that they could meet food programme best practices; however, knowledge of food programme best practices was lower than anticipated (median 63–67 % accuracy). More providers with additional staff had higher self-efficacy in family-style meal service than those without (P = 0·006).
Conclusions:
Providers had high self-efficacy in meeting nutrition best practices and reported few barriers. While providers were successfully meeting some individual best practices, few met all. Few differences were observed between FCCH providers with and without additional staff. FCCH providers need additional nutrition training on implementation of best practices.
Seed retention, and ultimately seed shatter, are extremely important for the efficacy of harvest weed seed control (HWSC) and are likely influenced by various agroecological and environmental factors. Field studies investigated seed-shattering phenology of 22 weed species across three soybean [Glycine max (L.) Merr.]-producing regions in the United States. We further evaluated the potential drivers of seed shatter in terms of weather conditions, growing degree days, and plant biomass. Based on the results, weather conditions had no consistent impact on weed seed shatter. However, there was a positive correlation between individual weed plant biomass and delayed weed seed–shattering rates during harvest. This work demonstrates that HWSC can potentially reduce weed seedbank inputs of plants that have escaped early-season management practices and retained seed through harvest. However, smaller individuals of plants within the same population that shatter seed before harvest pose a risk of escaping early-season management and HWSC.
Susceptibility to infection such as SARS-CoV-2 may be influenced by host genotype. TwinsUK volunteers (n = 3261) completing the C-19 COVID-19 symptom tracker app allowed classical twin studies of COVID-19 symptoms, including predicted COVID-19, a symptom-based algorithm to predict true infection, derived from app users tested for SARS-CoV-2. We found heritability of 49% (32−64%) for delirium; 34% (20−47%) for diarrhea; 31% (8−52%) for fatigue; 19% (0−38%) for anosmia; 46% (31−60%) for skipped meals and 31% (11−48%) for predicted COVID-19. Heritability estimates were not affected by cohabiting or by social deprivation. The results suggest the importance of host genetics in the risk of clinical manifestations of COVID-19 and provide grounds for planning genome-wide association studies to establish specific genes involved in viral infectivity and the host immune response.
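For readers unfamiliar with the classical twin design mentioned above, the sketch below illustrates the underlying logic using Falconer's approximation, in which heritability is inferred from the gap between monozygotic and dizygotic twin-pair correlations. This is a simplified stand-in for the structural-equation (ACE) modelling such studies typically use, and the correlation values are hypothetical placeholders, not TwinsUK estimates.

```python
# Illustrative sketch of how a classical twin design yields a heritability
# estimate, using Falconer's approximation h2 = 2 * (r_MZ - r_DZ).
# The correlations below are hypothetical placeholders, not TwinsUK values.

def falconer_heritability(r_mz: float, r_dz: float) -> float:
    """Approximate heritability from MZ and DZ twin-pair correlations."""
    h2 = 2 * (r_mz - r_dz)          # additive genetic component (A)
    return max(0.0, min(1.0, h2))   # bound the estimate to [0, 1]

def shared_environment(r_mz: float, r_dz: float) -> float:
    """Approximate shared-environment component c2 = 2*r_DZ - r_MZ."""
    return max(0.0, min(1.0, 2 * r_dz - r_mz))

if __name__ == "__main__":
    r_mz, r_dz = 0.45, 0.25  # hypothetical within-pair correlations for a symptom
    print(f"h2 ~ {falconer_heritability(r_mz, r_dz):.2f}")
    print(f"c2 ~ {shared_environment(r_mz, r_dz):.2f}")
```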
Potential effectiveness of harvest weed seed control (HWSC) systems depends upon seed shatter of the target weed species at crop maturity, enabling its collection and processing at crop harvest. However, seed retention likely is influenced by agroecological and environmental factors. In 2016 and 2017, we assessed seed-shatter phenology in 13 economically important broadleaf weed species in soybean [Glycine max (L.) Merr.] from crop physiological maturity to 4 wk after physiological maturity at multiple sites spread across 14 states in the southern, northern, and mid-Atlantic United States. Greater proportions of seeds were retained by weeds in southern latitudes and shatter rate increased at northern latitudes. Amaranthus spp. seed shatter was low (0% to 2%), whereas shatter varied widely in common ragweed (Ambrosia artemisiifolia L.) (2% to 90%) over the weeks following soybean physiological maturity. Overall, the broadleaf species studied shattered less than 10% of their seeds by soybean harvest. Our results suggest that some of the broadleaf species with greater seed retention rates in the weeks following soybean physiological maturity may be good candidates for HWSC.
Seed shatter is an important weediness trait on which the efficacy of harvest weed seed control (HWSC) depends. The level of seed shatter in a species is likely influenced by agroecological and environmental factors. In 2016 and 2017, we assessed seed shatter of eight economically important grass weed species in soybean [Glycine max (L.) Merr.] from crop physiological maturity to 4 wk after maturity at multiple sites spread across 11 states in the southern, northern, and mid-Atlantic United States. From soybean maturity to 4 wk after maturity, cumulative percent seed shatter was lowest in the southern U.S. regions and increased moving north through the states. At soybean maturity, the percent of seed shatter ranged from 1% to 70%. That range had shifted to 5% to 100% (mean: 42%) by 25 d after soybean maturity. There were considerable differences in seed-shatter onset and rate of progression between sites and years in some species that could impact their susceptibility to HWSC. Our results suggest that many summer annual grass species are likely not ideal candidates for HWSC, although HWSC could substantially reduce their seed output during certain years.
To disrupt cycles of health inequity, traceable to dietary inequities in the earliest stages of life, public health interventions should target improving nutritional wellbeing in preconception/pregnancy environments. This requires a deep engagement with pregnant/postpartum people (PPP) and their communities (including their health and social care providers, HSCP). We sought to understand the factors that influence diet during pregnancy from the perspectives of PPP and HSCP, and to outline intervention priorities.
Design:
We carried out thematic network analyses of transcripts from ten focus group discussions (FGD) and one stakeholder engagement meeting with PPP and HSCP in a Canadian city. Identified themes were developed into conceptual maps, highlighting local priorities for pregnancy nutrition and intervention development.
Setting:
FGD and the stakeholder meeting were run in predominantly lower socioeconomic position (SEP) neighbourhoods in the sociodemographically diverse city of Hamilton, Canada.
Participants:
All participants were local, comprising twenty-two lower-SEP PPP and forty-three HSCP.
Results:
Salient themes were resilience, resources, relationships and the embodied experience of pregnancy. Both PPP and HSCP underscored that socioeconomic-political forces operating at multiple levels largely determined the availability of individual and relational resources constraining diet during pregnancy. Intervention proposals focused on cultivating individual and community resilience to improve early-life nutritional environments. Participants called for better-integrated services, greater income supports and strengthened support programmes.
Conclusions:
Hamilton stakeholders foregrounded social determinants of inequity as main factors influencing pregnancy diet. They further indicated a need to develop interventions that build resilience and redistribute resources at multiple levels, from the household to the state.
The aim of this study was to describe the sensitivity of various C-reactive protein (CRP) cut-off values to identify patients requiring magnetic resonance imaging evaluation for pyogenic spinal infection among emergency department (ED) adults presenting with neck or back pain.
Methods
We prospectively enrolled a convenience series of adults presenting to a community ED with neck or back pain in whom ED providers had concern for pyogenic spinal infection in a derivation cohort from 2004 to 2010 and a validation cohort from 2010 to 2018. The validation cohort included only patients with pyogenic spinal infection. We analysed diagnostic test characteristics of various CRP cut-off values.
Results
We enrolled 232 patients and analysed 201 patients. The median age was 55 years, 43.8% were male, 4.0% had a history of intravenous drug use, and 20.9% had recent spinal surgery. In the derivation cohort, 38 (23.9%) of 159 patients had pyogenic spinal infection. Derivation sensitivity and specificity of CRP cut-off values were > 3.5 mg/L (100%, 24.8%), > 10 mg/L (100%, 41.3%), > 30 mg/L (100%, 61.2%), and > 50 mg/L (89.5%, 69.4%). Validation sensitivities of CRP cut-off values were > 3.5 mg/L (97.6%), > 10 mg/L (97.6%), > 30 mg/L (90.4%), and > 50 mg/L (85.7%).
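As a point of clarification on the diagnostic test characteristics reported above, the sketch below shows how sensitivity and specificity at a single cut-off are derived from a 2×2 classification of test result against disease status. The function name and the CRP values in the example are hypothetical illustrations, not the study's data or analysis code.

```python
# Minimal sketch of sensitivity and specificity for a single biomarker
# cut-off (e.g. CRP > 10 mg/L). The data below are hypothetical and do not
# reproduce the study cohort.

def test_characteristics(values, diseased, cutoff):
    """Return (sensitivity, specificity) treating 'value > cutoff' as test-positive."""
    tp = sum(1 for v, d in zip(values, diseased) if d and v > cutoff)
    fn = sum(1 for v, d in zip(values, diseased) if d and v <= cutoff)
    tn = sum(1 for v, d in zip(values, diseased) if not d and v <= cutoff)
    fp = sum(1 for v, d in zip(values, diseased) if not d and v > cutoff)
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    return sensitivity, specificity

if __name__ == "__main__":
    crp = [2.1, 55.0, 8.4, 120.0, 31.0, 4.9]        # hypothetical CRP values, mg/L
    spinal_infection = [False, True, False, True, True, False]
    sens, spec = test_characteristics(crp, spinal_infection, cutoff=10.0)
    print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
```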
Conclusions
CRP cut-offs beyond the upper limit of normal had high sensitivity for pyogenic spinal infection in this adult ED population. Elevated CRP cut-off values of 10 mg/L and 30 mg/L require validation in other settings.
Centralized ratings by telephone have proven feasible for assessment of psychiatric diagnosis, symptom severity, and suicidality, and may be used for safety assessments in non-psychiatric trials with sites that do not employ staff experienced in psychiatric assessment.
Objective
To assess whether centralizing assessments with mental health experts enables immediate clinical follow-up and actionable diagnostic support for investigators.
Aims
To examine the feasibility of centralized ratings in a Phase III dermatology clinical trial for safety assessments.
Methods
1127 subjects enrolled in a trial of medication for their dermatologic condition were assessed via telephone by central raters who administered the SCID-CT, C-SSRS and PHQ-8 at screening. At monthly visits, central raters performed the C-SSRS, PHQ-8, GAD-7 and items designed to detect emergent psychotic symptoms.
Results:
Screening
34 subjects were excluded on the basis of SCID-CT diagnosis. Based on diagnosis or severity, subjects were classified as being in no need of mental health services, having mild psychiatric symptoms (referred to local mental health service provider; n=33), moderate (immediate referral for psychiatric evaluation; n=17), or severe (immediate escort to emergency room; n=0).
One subject reported suicidal ideation on the C-SSRS, 10 reported self-injurious behavior, and 5 reported suicidal behavior in the last year.
Follow-Up
No subjects reported suicidal ideation or behavior at any of the 6861 follow-up assessments. One subject reported self-injurious behavior and two reported emergent psychotic symptoms.
Conclusions
This study established the feasibility and acceptability of routine screening and monitoring of psychopathology and suicidality by central raters in a non-psychiatric population.
To evaluate the long-term efficacy of deutetrabenazine in patients with tardive dyskinesia (TD) by examining response rates from baseline in Abnormal Involuntary Movement Scale (AIMS) scores. Preliminary results of the responder analysis are reported here.
Background
In the 12-week ARM-TD and AIM-TD studies, the odds of response to deutetrabenazine treatment were higher than the odds of response to placebo at all response levels, and there were low rates of overall adverse events and discontinuations associated with deutetrabenazine.
Method
Patients with TD who completed ARM-TD or AIM-TD were included in this open-label, single-arm extension study, in which all patients restarted/started deutetrabenazine 12 mg/day, titrating up to a maximum total daily dose of 48 mg/day based on dyskinesia control and tolerability. The study comprised a 6-week titration period and a long-term maintenance phase. The cumulative proportion of AIMS responders from baseline was assessed. Response was defined as a percent improvement from baseline for each patient, from 10% to 90% in 10% increments. AIMS scores were assessed by local site ratings for this analysis.
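To make the responder definition above concrete, the sketch below computes each patient's percent improvement in AIMS score from baseline and tallies the cumulative proportion reaching each 10% threshold. The helper names and scores are hypothetical illustrations, not the study's analysis code or data.

```python
# Sketch of a cumulative responder analysis: percent improvement in AIMS score
# from baseline per patient, then the share of patients at or above each
# response threshold (10% ... 90%). Scores below are hypothetical placeholders.

def percent_improvement(baseline: float, current: float) -> float:
    """Percent improvement from baseline (positive values = improvement)."""
    if baseline <= 0:
        raise ValueError("baseline AIMS score must be positive")
    return 100.0 * (baseline - current) / baseline

def cumulative_responders(baselines, currents, thresholds=range(10, 100, 10)):
    """Proportion of patients meeting or exceeding each improvement threshold."""
    improvements = [percent_improvement(b, c) for b, c in zip(baselines, currents)]
    n = len(improvements)
    return {t: sum(imp >= t for imp in improvements) / n for t in thresholds}

if __name__ == "__main__":
    baseline_aims = [10, 12, 8, 14]   # hypothetical baseline AIMS scores
    followup_aims = [4, 9, 2, 7]      # hypothetical follow-up AIMS scores
    print(cumulative_responders(baseline_aims, followup_aims))
```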
Results
343 patients enrolled in the extension study (111 patients received placebo in the parent study and 232 patients received deutetrabenazine). At Week 54 (n=145; total daily dose [mean±standard error]: 38.1±0.9 mg), 63% of patients receiving deutetrabenazine achieved ≥30% response, 48% of patients achieved ≥50% response, and 26% achieved ≥70% response. At Week 80 (n=66; total daily dose: 38.6±1.1 mg), 76% of patients achieved ≥30% response, 59% of patients achieved ≥50% response, and 36% achieved ≥70% response. Treatment was generally well tolerated.
Conclusions
Patients who received long-term treatment with deutetrabenazine achieved response rates higher than those observed in positive short-term studies, indicating clinically meaningful long-term treatment benefit.
Presented at: American Academy of Neurology Annual Meeting; April 21–27, 2018, Los Angeles, California, USA.
Funding Acknowledgements: This study was supported by Teva Pharmaceuticals, Petach Tikva, Israel.
To evaluate the long-term safety and tolerability of deutetrabenazine in patients with tardive dyskinesia (TD) at 2 years.
Background
In the 12-week ARM-TD and AIM-TD studies, deutetrabenazine showed clinically significant improvements in Abnormal Involuntary Movement Scale scores compared with placebo, and there were low rates of overall adverse events (AEs) and discontinuations associated with deutetrabenazine.
Method
Patients who completed ARM-TD or AIM-TD were included in this open-label, single-arm extension study, in which all patients restarted/started deutetrabenazine 12 mg/day, titrating up to a maximum total daily dose of 48 mg/day based on dyskinesia control and tolerability. The study comprised a 6-week titration period and a long-term maintenance phase. Safety measures included incidence of AEs, serious AEs (SAEs), and AEs leading to withdrawal, dose reduction, or dose suspension. Exposure-adjusted incidence rates (EAIRs; incidence/patient-years) were used to compare AE frequencies for long-term treatment with those for short-term treatment (ARM-TD and AIM-TD). This analysis reports results up to 2 years (Week 106).
Results
343 patients were enrolled (111 patients received placebo in the parent study and 232 received deutetrabenazine). There were 331.4 patient-years of exposure in this analysis. Through Week 106, EAIRs of AEs were comparable to or lower than those observed with short-term deutetrabenazine and placebo, including AEs of interest (akathisia/restlessness [long-term EAIR: 0.02; short-term EAIR range: 0–0.25], anxiety [0.09; 0.13–0.21], depression [0.09; 0.04–0.13], diarrhea [0.06; 0.06–0.34], parkinsonism [0.01; 0–0.08], somnolence/sedation [0.09; 0.06–0.81], and suicidality [0.02; 0–0.13]). The frequency of SAEs (EAIR 0.15) was similar to that observed with short-term placebo (0.33) and deutetrabenazine (range 0.06–0.33) treatment. AEs leading to withdrawal (0.08), dose reduction (0.17), and dose suspension (0.06) were uncommon.
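Since the abstract defines EAIRs only in passing (incidence/patient-years), a minimal sketch of the calculation is given below. The event count in the example is hypothetical; only the exposure figure echoes the abstract.

```python
# Sketch of the exposure-adjusted incidence rate (EAIR) as defined above:
# number of patients with at least one event divided by total patient-years
# of exposure. The event count below is a hypothetical placeholder.

def eair(patients_with_event: int, total_patient_years: float) -> float:
    """EAIR = incidence / patient-years of exposure."""
    if total_patient_years <= 0:
        raise ValueError("patient-years of exposure must be positive")
    return patients_with_event / total_patient_years

if __name__ == "__main__":
    # e.g. 30 patients with a given AE (hypothetical) over ~331.4 patient-years
    print(f"EAIR = {eair(30, 331.4):.2f} events per patient-year")
```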
Conclusions
These results confirm the safety outcomes seen in the ARM-TD and AIM-TD parent studies, demonstrating that deutetrabenazine is well tolerated for long-term use in TD patients.
Presented at: American Academy of Neurology Annual Meeting; April 21–27, 2018, Los Angeles, California, USA.
Funding Acknowledgements: This study was supported by Teva Pharmaceuticals, Petach Tikva, Israel.
One of the first things that one discovers, surveying the many hundreds of languages used throughout the world, is that many of them (perhaps a third or more of the world’s 7000 living languages) have no written form. And if one were to travel back in time to an earlier age, the proportion of languages having a written form would be far less. What this means for us is that written language is both secondary to spoken language and derivative of it. So, one might accurately state that all languages are spoken languages, but only some languages (albeit many of them) are also written languages. Thus, while this book will explore the nature of writing systems (in chapter 9) and the significant role that these indeed do play in language conflicts, it is essential that we first examine the properties of spoken language.
In this chapter, the reader will find discussion of cases involving dialect minorities. The two featured cases are Okinawan speakers in Japan and African-American English (AAE) speakers in the US. Each case presents the story of a group's speaking the “wrong” (i.e. stigmatized) variety of a language, and being punished (economically, socially, and politically) for doing so. The former case involves speakers of a language that is not Japanese (i.e. Ryūkyūan) being presumed to speak a (stigmatized) dialect of Japanese and being made to suffer for it. In the second case, we find the variety of English spoken by African-Americans to be especially stigmatized on account of a generally negative disposition toward the minority group itself, rather than on account of any objective features of their dialect. In the end-of-chapter section on extra cases for further exploration, the reader will find synopses on: Occitan in France, Singaporean English and local Chinese dialects in Singapore, and Landsmål/Bokmål in Norway.
This chapter presents cases in which language conflict and language rights issues have arisen in the aftermath of the creation of a geopolitical minority as a consequence of changed national boundaries. Some of these changes are the outcome of war, some result from political unification, and others stem from political dissolution. Each case, though, involves a linguistic group finding itself a minority in a country dominated by another linguistic group, without having moved anywhere. The cases featured in this chapter are Hungarians in Slovakia, Hispanics in Southwest US, and Kurds in Turkey. Three extra cases presented at the end of the chapter for the reader to explore are the Tetum in Timor Leste, the Amazigh (Berbers) in the Maghreb region of Africa, and the Tibetans in China.
Sapir recognized the role of language in determining social and cultural identity, asserting that “common speech serves as a peculiarly potent symbol of the social identity of those who speak the language”. It is well established that language is a factor in social solidarity—traditionally one identifies most closely with those who speak the same language and, further, the same variety of that language. The idea that language and culture go hand in hand is thus intuitively appealing. In fact, so appealing is this idea that many people take language as a proxy for culture, and what are presented as language conflicts are often in fact socio-cultural conflicts, as we will see in later chapters. This chapter examines the relationship between language and culture, asking whether language is simply emblematic of culture or whether there is a deeper, causal relationship.