This chapter seeks to offer an overview of the legal powers at the disposal of the British government to secure its domestic objectives during the First World War. It argues broadly that a flexible and frequently changing legal framework for wartime domestic policy, notwithstanding instances of legal bluffing and occasional repressive action by the executive against industrial militants and the critical press, reflected sensitivity on the part of the authorities to the stresses and tensions caused by wartime legal restrictions and shortages. The most important of these legal powers were those granted under a succession of Defence of the Realm Acts (DORA) enacted throughout the war. These acts authorised the promulgation by the Privy Council of hundreds of Defence Regulations covering most facets of economic and social life on the home front. Those regulations, in turn, devolved legal powers to more subordinate governmental levels, normally ministers of the crown, where ‘notices’, ‘orders’, and ‘rules’, whether of local or of more general application, industry-wide or narrowly focused, would be issued. It may also be noted that wartime delegated or sub-delegated powers could be granted by pre-August 1914 legislation, and could be conferred on statutory bodies, not just on ministers. For example, the Scottish Insurance Commissioners were granted powers under the National Insurance Act 1911 to make regulations during the war whereby individuals who represented insured persons on insurance committees, and who were absent on war service for more than six months without leave of the committee, would not be deemed to have ceased to be members of the insurance committee.
Evidence suggests a link between smaller hippocampal volume (HV) and post-traumatic stress disorder (PTSD). However, there has been little prospective research addressing this question directly, and it remains unclear whether smaller HV confers risk or is a consequence of traumatization and PTSD.
Methods
U.S. soldiers (N = 107) completed a battery of clinical assessments, including structural magnetic resonance imaging, pre-deployment. Once deployed, they completed monthly assessments of traumatic stressors and symptoms. We hypothesized that smaller HV would potentiate the effects of traumatic stressors on PTSD symptoms in theater. Analyses evaluated whether total HV, lateral (right v. left) HV, or HV asymmetry (right − left) moderated the effects of stressor exposure during deployment on PTSD symptoms.
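As an illustration (not part of the original study), the following is a minimal sketch of this kind of moderation analysis in Python, assuming a simplified cross-sectional OLS specification in place of the repeated-measures design; the column names and input file are hypothetical placeholders.

```python
# Minimal sketch of a moderation analysis with a mean-centered
# interaction term. Column names (hv_total, stressors, pcl) and the
# input file are hypothetical, not the study's actual variables.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("deployment_data.csv")  # hypothetical input file

# Mean-center the predictors so the main effects are interpretable.
df["hv_c"] = df["hv_total"] - df["hv_total"].mean()
df["stress_c"] = df["stressors"] - df["stressors"].mean()

# PTSD symptoms regressed on HV, stressor exposure, and their product;
# a significant hv_c:stress_c coefficient indicates moderation.
model = smf.ols("pcl ~ hv_c * stress_c", data=df).fit()
print(model.summary())
```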
Results
Findings revealed no interaction between total HV and average monthly traumatic stressors on PTSD symptoms (b = −0.028, p = 0.681; 95% confidence interval (CI) −0.167 to 0.100). However, in the context of greater exposure to average monthly traumatic stressors, greater right HV was associated with fewer PTSD symptoms (b = −0.467, p = 0.023; 95% CI −0.786 to −0.013), whereas greater left HV was unexpectedly associated with greater PTSD symptoms (b = 0.435, p = 0.024; 95% CI 0.028–0.715).
Conclusions
Our findings highlight the importance of considering the complex role of HV, in particular HV asymmetry, in predicting the emergence of PTSD symptoms in response to war-zone trauma.
This chapter will examine physical health through the example of women’s sexual embodiment after cancer. Women’s sexual embodiment can become disrupted after cancer due to a range of physical changes, body dissatisfaction, and psychological distress, leading to diminished sexual well-being. Life stage, couple relationship status and quality, cultural background, and sexual identity can shape women’s experience of sexual change. Existing studies have predominantly reflected Western cultural discourses that privilege a biomedical model of sexual dysfunction, heterosexual relationship dynamics, and the value placed on body appearance for feminine identity. Such discourses inform "abnormal," "unfeminine," and "unsexual" meanings that women ascribe to their bodies after cancer. However, women also report discursive and practical strategies they use to positively renegotiate embodied change. Health professionals can support women with cancer by giving permission for the discussion of sexual matters, providing information, and acknowledging and normalizing sexual change.
The following position statement from the Union of the European Phoniatricians, updated on 25th May 2020 (superseding the previous statement issued on 21st April 2020), contains a series of recommendations for phoniatricians and ENT surgeons who provide and/or run voice, swallowing, speech and language, or paediatric audiology services.
Objectives
This material specifically aims to inform clinical practice in countries where clinics and operating theatres are reopening for elective work. It endeavours to present a current European view in relation to common procedures, many of which are classed as aerosol-generating procedures.
Conclusion
As evidence continues to build, some of the recommended practices will undoubtedly evolve, but it is hoped that the updated position statement will offer clinicians precepts on safe clinical practice.
There is significant interest in the use of angiotensin converting enzyme inhibitors (ACE-I) and angiotensin II receptor blockers (ARB) in coronavirus disease 2019 (COVID-19) and concern over potential adverse effects since these medications upregulate the severe acute respiratory syndrome coronavirus 2 host cell entry receptor ACE2. Recent studies on ACE-I and ARB in COVID-19 were limited by excluding outpatients, excluding patients by age, analyzing ACE-I and ARB together, imputing missing data, and/or diagnosing COVID-19 by chest computed tomography without definitive reverse transcription polymerase chain reaction (RT-PCR), all of which are addressed here.
Methods:
We performed a retrospective cohort study of 1023 COVID-19 patients diagnosed by RT-PCR at Stanford Hospital through April 8, 2020, with a minimum follow-up time of 14 days, to investigate the association of ACE-I or ARB use with outcomes.
Results:
Use of ACE-I or ARB medications was not associated with increased risk of hospitalization, intensive care unit admission, or death. Compared to patients with charted past medical history, there was a lower risk of hospitalization for patients on ACE-I (odds ratio (OR) 0.43; 95% confidence interval (CI) 0.19–0.97; P = 0.0426) and ARB (OR 0.39; 95% CI 0.17–0.90; P = 0.0270). Compared to patients with hypertension not on ACE-I or ARB, patients on ARB medications had a lower risk of hospitalization (OR 0.09; 95% CI 0.01–0.88; P = 0.0381).
Conclusions:
These findings suggest that the use of ACE-I and ARB is not associated with adverse outcomes and may be associated with improved outcomes in COVID-19, which is immediately relevant to care of the many patients on these medications.
Given the rapidly progressing coronavirus disease 2019 (COVID-19) pandemic, this report on a US cohort of 54 COVID-19 patients from Stanford Hospital, with risk-factor data obtained at initial clinical presentation, is immediately clinically relevant. We identified low presenting oxygen saturation as predictive of severe disease outcomes, such as diagnosis of pneumonia, acute respiratory distress syndrome, and admission to the intensive care unit, and we also replicated data from China suggesting an association between hypertension and disease severity. Clinicians will benefit from tools to rapidly risk-stratify patients at presentation by likelihood of progression to severe disease.
To evaluate the National Healthcare Safety Network (NHSN) hospital-onset Clostridioides difficile infection (HO-CDI) standardized infection ratio (SIR) risk adjustment for general acute-care hospitals with large numbers of intensive care unit (ICU), oncology unit, and hematopoietic cell transplant (HCT) patients.
Design:
Retrospective cohort study.
Setting:
Eight tertiary-care referral general hospitals in California.
Methods:
We used FY 2016 data and the published 2015 rebaseline NHSN HO-CDI SIR. We compared facility-wide inpatient HO-CDI events and SIRs, with and without ICU data, oncology and/or HCT unit data, and ICU bed adjustment.
Results:
For these hospitals, the median unmodified HO-CDI SIR was 1.24 (interquartile range [IQR], 1.15–1.34); 7 hospitals qualified for the highest ICU bed adjustment; 1 hospital received the second highest ICU bed adjustment; and all had oncology-HCT units with no additional adjustment per the NHSN. Removal of ICU data and the ICU bed adjustment decreased HO-CDI events (median, −25%; IQR, −20% to −29%) but increased the SIR at all hospitals (median, 104%; IQR, 90%–105%). Removal of oncology-HCT unit data decreased HO-CDI events (median, −15%; IQR, −14% to −21%) and decreased the SIR at all hospitals (median, −8%; IQR, −4% to −11%).
Conclusions:
For tertiary-care referral hospitals with specialized ICUs and a large number of ICU beds, the ICU bed adjustor functions as a global adjustment in the SIR calculation, accounting for the increased complexity of patients in ICUs and non-ICUs at these facilities. However, the SIR decrease with removal of oncology and HCT unit data, even with the ICU bed adjustment, suggests that an additional adjustment should be considered for oncology and HCT units within general hospitals, perhaps similar to what is done for ICU beds in the current SIR.
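As a hedged illustration of the ratio mechanics (not taken from the study's data), the SIR is observed events divided by the events predicted by the NHSN risk model, so removing ICU data can lower the event count while raising the SIR whenever the predicted count falls faster. The counts below are hypothetical, chosen only to mirror the direction and rough magnitude of the results above.

```python
# The SIR is observed HO-CDI events divided by the number predicted by
# the NHSN risk model. All counts here are hypothetical illustrations.
observed_all, predicted_all = 124, 100           # facility-wide, with ICU data
sir_all = observed_all / predicted_all           # 1.24

# Dropping ICU data removes ~25% of observed events, but it also drops
# the ICU bed adjustment, shrinking the predicted count even faster.
observed_no_icu, predicted_no_icu = 93, 37
sir_no_icu = observed_no_icu / predicted_no_icu  # ~2.51, roughly doubled

print(f"SIR with ICU data:    {sir_all:.2f}")
print(f"SIR without ICU data: {sir_no_icu:.2f}")
```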
Cyber Operational Risk: In various publications and surveys, cyber risk is routinely cited as one of the most important sources of operational risk facing organisations today. Further, in recent years, cyber risk has entered the public consciousness through highly publicised events involving affected UK organisations such as TalkTalk, Morrisons and the NHS. Regulators and legislators are increasing their focus on this topic, with the General Data Protection Regulation (“GDPR”) a notable example. Risk actuaries and other risk management professionals at insurance companies therefore need a robust assessment of the potential losses stemming from cyber risk that their organisations may face. They should be able to produce this as part of an overall risk management framework and to demonstrate it to stakeholders such as regulators and shareholders. Given that cyber risks are still very much new territory for insurers and there is no commonly accepted practice, this paper describes a proposed framework in which to perform such an assessment. As part of this, we leverage two existing frameworks – the Chief Risk Officer (“CRO”) Forum cyber incident taxonomy, and the National Institute of Standards and Technology (“NIST”) framework – to describe, respectively, the taxonomy of a cyber incident and the relevant cyber security and risk mitigation items for the incident in question.
Summary of Results: Three detailed scenarios have been investigated by the working party:
∙ Employee leaks data at a general (non-life) insurer: Internal attack through social engineering, causing large compensation costs and regulatory fines, driving a 1 in 200 loss of £210.5m (c. 2% of annual revenue).
∙ Cyber extortion at a life insurer: External attack through social engineering, causing large business interruption and reputational damage, driving a 1 in 200 loss of £179.5m (c. 6% of annual revenue).
∙ Motor insurer telematics device hack: External attack through software vulnerabilities, causing large remediation / device replacement costs, driving a 1 in 200 loss of £70.0m (c. 18% of annual revenue).
Limitations: The key limitations of the work presented in this paper are as follows:
∙ While the presented scenarios are deemed material at this point in time, the threat landscape moves fast and could render specific narratives and calibrations obsolete within a short time frame.
∙ There is a lack of historical data on which to base certain scenarios, so a high degree of subjectivity was used in calibrating them.
∙ No attempt has been made to allow for seasonality of renewals (a cyber event coinciding with peak renewal season could exacerbate cost impacts).
∙ No consideration has been given to the impact of the event on the share price of the company.
∙ Correlation with other risk types has not been explicitly considered.
Conclusions: Cyber risk is a very real threat and should not be ignored or treated lightly in operational risk frameworks, as it has the potential to threaten the ongoing viability of an organisation. Risk managers and capital actuaries should be aware of the various sources of cyber risk and their potential impacts to ensure that the business is sufficiently prepared for such an event. Quantifying the impact of cyber risk on the operations of an insurer poses significant challenges: not least, the threat landscape is ever-changing and there is little historical experience on which to base assumptions. Given this uncertainty, this paper sets out a framework with which readers can bring consistency to the way scenarios are developed over time. It provides a common taxonomy to ensure that key aspects of cyber risk are considered, and sets out examples of how to implement the framework. It is critical that insurers endeavour to understand cyber risk better and look to refine assumptions over time as new information is received. In addition to ensuring that sufficient capital is held for key operational risks, investment in understanding cyber risk now will help to educate senior management and could have benefits through influencing internal cyber security capabilities.
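As an illustrative supplement (not the working party's method), one common way to attach a 1-in-200 figure to a calibrated scenario is Monte Carlo simulation of an assumed severity distribution, reading off the 99.5th percentile. The lognormal form and its parameters below are illustrative assumptions only.

```python
# Hedged sketch: simulate an assumed loss-severity distribution and
# read off the 99.5th percentile (the 1-in-200-year loss). The
# lognormal parameters are illustrative, not a real calibration.
import numpy as np

rng = np.random.default_rng(seed=42)
n_sims = 100_000

# Assumed severity: lognormal with a median of ~GBP 20m and a heavy tail.
losses = rng.lognormal(mean=np.log(20e6), sigma=0.9, size=n_sims)

# 99.5th percentile corresponds to a 1-in-200 annual exceedance.
one_in_200 = np.quantile(losses, 0.995)
print(f"1-in-200 loss: £{one_in_200 / 1e6:.1f}m")
```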
Although most hospitals report very high levels of hand hygiene compliance (HHC), the accuracy of these overtly observed rates is questionable due to the Hawthorne effect and other sources of bias. In this study, we aimed (1) to compare HHC rates estimated using the standard audit method of overt observation by a known observer with rates from a new audit method that employed a rapid (<15 minutes) “secret shopper” approach, and (2) to pilot test a novel feedback tool.
Design
Quality improvement project using a quasi-experimental stepped-wedge design.
Setting
This study was conducted in 5 acute-care hospitals (17 wards, 5 intensive care units) in the Midwestern United States.
Methods
Sites recruited a hand hygiene observer from outside the acute-care units to rapidly and covertly observe entry and exit HHC during the study period, October 2016–September 2017. After 3 months of observations, sites received a monthly feedback tool that communicated HHC information from the new audit method.
Results
The absolute difference in HHC estimates between the standard and new audit methods was ~30%. No significant differences in HHC were detected between the baseline and feedback phases (OR, 0.92; 95% CI, 0.84–1.01), but the standard audit method had significantly higher estimates than the new audit method (OR, 9.83; 95% CI, 8.82–10.95).
Conclusions
HHC estimates obtained using the new audit method were substantially lower than estimates obtained using the standard audit method, suggesting that the rapid, secret-shopper method is less subject to bias. Providing feedback using HHC from the new audit method did not seem to impact HHC behaviors.
To evaluate the impact of discontinuing routine contact precautions (CP) for endemic methicillin-resistant Staphylococcus aureus (MRSA) and vancomycin-resistant Enterococcus (VRE) on hospital adverse events.
SETTING
Academic medical center with single-occupancy rooms.
PARTICIPANTS
Inpatients.
METHODS
We compared hospital-reportable adverse events 1 year before and 1 year after discontinuation of routine CP for endemic MRSA and VRE (preintervention and postintervention periods, respectively). Throughout the preintervention period, daily chlorhexidine gluconate bathing was expanded to nearly all inpatients. Chart reviews were performed to identify which patients and events were associated with CP for MRSA/VRE in the preintervention period, as well as the patients who would have met prior criteria for MRSA/VRE CP but were not isolated in the postintervention period. Adverse events during the 2 periods were compared using segmented and mixed-effects Poisson regression models.
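As an illustration (not the study's actual code), the following is a minimal sketch of a segmented Poisson regression of this kind; the column names and the change-point month are hypothetical placeholders.

```python
# Minimal sketch of a segmented (interrupted time series) Poisson
# regression of monthly adverse-event counts. Column names and the
# change point are hypothetical.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("monthly_adverse_events.csv")  # hypothetical input file

# Level- and slope-change terms, assuming CP were discontinued after month 12.
df["post"] = (df["month"] > 12).astype(int)
df["months_post"] = (df["month"] - 12).clip(lower=0)

# Poisson model of monthly event counts; admissions enter as an
# exposure (log offset). exp(coefficient on 'post') estimates the
# level change in the adverse-event rate after CP discontinuation.
model = smf.glm(
    "events ~ month + post + months_post",
    data=df,
    family=sm.families.Poisson(),
    exposure=df["admissions"],
).fit()
print(model.summary())
```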
RESULTS
There were 24,732 admissions in the preintervention period and 25,536 in the postintervention period. Noninfectious adverse events (ie, postoperative respiratory failure, hemorrhage/hematoma, thrombosis, wound dehiscence, pressure ulcers, and falls or trauma) decreased by 19% (12.3 to 10.0 per 1,000 admissions, P=.022) from the preintervention to the postintervention period. There was no significant difference in the rate of infectious adverse events after CP discontinuation (20.7 to 19.4 per 1,000 admissions, P=.33). Patients with MRSA/VRE showed the largest reduction in noninfectious adverse events after CP discontinuation, with a 72% reduction (21.4 to 6.08 per 1,000 MRSA/VRE admissions; P<.001).
CONCLUSION
After discontinuing routine CP for endemic MRSA/VRE, the rate of noninfectious adverse events declined, especially in patients who no longer required isolation. This suggests that elimination of CP may substantially reduce noninfectious adverse events.
From 2000 to 2009, rates of multidrug-resistant Acinetobacter baumannii increased 10-fold to 0.2 per 1,000 patient days. From 2010 to 2015, however, rates markedly declined and have stayed below 0.05 per 1,000 patient days. Herein, we present a 15-year trend analysis and discuss interventions that may have led to the decline.
Primary progressive aphasia (PPA) affects a range of language and cognitive domains that impact on conversation. Little is known about conversation breakdown in the semantic variant of PPA (svPPA, also known as semantic dementia). This study investigates the conversations of people with svPPA.
Methods:
Dyadic conversations about everyday activities were video recorded and transcribed for seven individuals with svPPA and their partners, and for seven control pairs. The number of words, the number of turns, and turn length were measured. Trouble-indicating behaviors (TIBs) and repair behaviors were categorized and identified as successful or not for each participant in each dyad.
Results:
In general, individuals with svPPA were active participants in conversation, taking an equal proportion of turns, but they indicated a great deal more trouble in conversation, shown by a significantly higher number of TIBs than partners or control participants produced. TIBs were both interactive (asking for confirmation with a shortened repetition of the original utterance, or a repetition that included a request for specific information) and non-interactive (such as failing to take up or continue the topic, or giving a minimal response), and they were unlike those previously reported for people with other PPA variants and dementia of the Alzheimer type. Communication behaviors of the partner were critical to conversational success.
Conclusions:
Examination of trouble and repair in 10-min conversations of individuals with svPPA and their important communication partners has the potential to inform speech pathology interventions that enhance successful conversation in svPPA, and should be an integral part of the comprehensive care plan.
Objectives: The current study examines whether psychosocial outcomes following pediatric traumatic brain injury (TBI) vary as a function of children’s rejection sensitivity (RS), defined as their disposition to be hypersensitive to cues of rejection from peers. Methods: Children ages 8–13 with a history of severe TBI (STBI, n=16), complicated mild/moderate TBI (n=35), or orthopedic injury (OI, n=49) completed measures assessing self-esteem and RS on average 3.28 years post-injury (SD=1.33, range=1.25–6.34). Parents reported on their child’s emotional and behavioral functioning and social participation. Results: Regression analyses found moderation of group differences by RS for three outcomes: social participation, self-perceptions of social acceptance, and externalizing behavior problems. Conditional effects at varying levels of RS indicated that externalizing problems and social participation were significantly worse for children with STBI at high levels of RS, compared to children with OI. Social participation for the STBI group remained significantly lower than the OI group at mean levels of RS, but not at low levels of RS. At high levels of RS, self-perceptions of social acceptance were lower for children with moderate TBI compared to OI, but group differences were not significant at mean or low levels of RS. No evidence of moderation was found for global self-worth, self-perceptions of physical appearance or athletic ability, or internalizing problems. Conclusions: The findings highlight the salient nature of social outcomes in the context of varying levels of RS. These findings may have implications for the design of interventions to improve social outcomes following TBI. (JINS, 2017, 23, 451–459)
Environmental exposures during pregnancy may increase breast cancer risk for mothers and female offspring. Tumor tissue assays may provide insight regarding the mechanisms. This study assessed the feasibility of obtaining tumor samples and pathology reports from mothers (F0) who were enrolled in the Child Health and Development Studies during pregnancy from 1959 to 1967 and their daughters (F1) who developed breast cancer over more than 50 years of follow-up. Breast cancer cases were identified through linkage to the California Cancer Registry and self-report. Written consent was obtained from 116 F0 and 95 F1 breast cancer survivors to access their pathology reports and tumor blocks. Of those contacted, 62% consented, 13% refused and 24% did not respond. We obtained tissue samples for 57% and pathology reports for 75%; when the diagnosis had been made ≤10 years earlier, we obtained tissue samples and pathology reports for 91% and 79%, respectively. Obtaining pathology reports and tumor tissues of two generations is feasible and will support investigation of the relationship between early-life exposures and molecular tumor markers. However, we found that more recent diagnosis increased the accessibility of tumor tissue. We recommend that cohorts request consent for obtaining future tumor tissues at study enrollment and implement real-time tissue collection to enhance the success of collecting tumor samples and data.
Recent studies regarding non-suicidal self-injury (NSSI) among adolescents have focused primarily on individual characteristics (e.g., depressive symptoms) and background factors (e.g., parental relationship), whereas less emphasis has been given to the role of school-related factors in NSSI. Therefore, the purpose of the current study was to explore the relationships between teachers’ support, peer climate, and NSSI within the school context.
Methods
The sample consisted of 594 high school students nested within 27 regular classes (54.4% boys; mean age 14.96, SD = 1.33 years). The students were evaluated for NSSI behaviors, perception of teacher support, peer climate, relationships with mothers, and depressive symptoms using validated scales.
Results
The primary analysis used hierarchical linear modeling (HLM), controlling for gender and age. The main findings indicated that teacher support was positively associated with NSSI at the classroom level (OR = 6.15, 95% CI = 2.05–18.5) but negatively associated at the student level (OR = 0.66, 95% CI = 0.49–0.89). There was a trend toward an association between positive peer climate and NSSI at the classroom level (OR = 0.43, 95% CI = 0.18–1.05), while negative peer climate was associated with NSSI at the student level (OR = 1.37, 95% CI = 1.00–1.87).
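As an illustration (not the study's actual model), opposing classroom- and student-level associations of this kind are typically separated by group-mean centering: the classroom mean carries the between-class effect and each student's deviation from it carries the within-class effect. The sketch below uses a plain logistic regression with cluster-robust standard errors as a simplified stand-in for the full HLM; all column names are hypothetical.

```python
# Sketch of separating classroom- from student-level effects via
# group-mean centering. Column names are hypothetical; a full HLM
# would add random intercepts per class.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("nssi_survey.csv")  # hypothetical input file
df["support_class"] = df.groupby("class_id")["teacher_support"].transform("mean")
df["support_within"] = df["teacher_support"] - df["support_class"]

# Logistic regression with cluster-robust standard errors; opposite
# signs on the two support terms would reproduce the pattern above.
model = smf.logit(
    "nssi ~ support_class + support_within + gender + age", data=df
).fit(cov_type="cluster", cov_kwds={"groups": df["class_id"]})
print(model.summary())
# Odds ratios can be obtained by exponentiating model.params.
```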
Conclusions
School-related factors are associated with NSSI behaviors among students. Teachers and educators should focus on both individual-level and classroom-level perceptions of school context. Students who feel supported by their teachers and who are exposed to a positive peer climate are less likely to engage in NSSI.
In situ Pleistocene reefs form a gently sloping nearshore terrace around the island of Oahu. TIMS Th–U ages of in situ corals indicate that most of the terrace is composed of reefal limestones correlating to Marine Oxygen Isotope Stage 7 (MIS 7, ~ 190–245 ka). The position of the in situ MIS 7 reef complex indicates that it formed during periods when local sea level was ~ 9 to 20 m below present sea level. Its extensiveness and geomorphic prominence as well as a paucity of emergent in situ MIS 7 reef-framework deposits on Oahu suggest that much of MIS 7 was characterized by regional sea levels below present. Later accretion along the seaward front of the terrace occurred during the latter part of MIS 5 (i.e., MIS 5a–5d, ~ 76–113 ka). The position of the late MIS 5 reefal limestones is consistent with formation during a period when local sea level was below present. The extensiveness of the submerged Pleistocene reefs around Oahu compared to the relative dearth of Holocene accretion is due to the fact that Pleistocene reefs had both more time and more accommodation space available for accretion than their Holocene counterparts.
A substantially modified history of the last two cycles of Lake Bonneville is proposed. The Bonneville lake cycle began prior to 26,000 yr B.P.; the lake reached the Bonneville shoreline about 16,000 yr B.P. Poor dating control limits our knowledge of the timing of subsequent events. Lake level was maintained at the Bonneville shoreline until about 15,000 yr B.P., or somewhat later, when catastrophic downcutting of the outlet caused a rapid drop of 100 m. The Provo shoreline was formed as rates of isostatic uplift due to this unloading slowed. By 13,000 yr B.P., the lake had fallen below the Provo level, and it reached one close to that of Great Salt Lake by 11,000 yr B.P. Deposits of the Little Valley lake cycle are identified by their position below a marked unconformity and by amino acid ratios of their fossil gastropods. The maximum level of the Little Valley lake was well below the Bonneville shoreline. Based on the degree of soil development and other evidence, the Little Valley lake cycle may be equivalent in age to marine oxygen isotope stage 6. The proposed lake history has climatic implications for the region. First, because the fluctuations of Lake Bonneville and Lake Lahontan during the last cycle of each were apparently out of phase, there may have been significant local differences in the timing and character of late Pleistocene climate changes in the Great Basin. Second, although the Bonneville and Little Valley lake cycles were broadly synchronous with maximum episodes of glaciation, environmental conditions necessary to generate large lakes did not exist during early Wisconsin time.