The COVID-19 pandemic presents a remarkable opportunity to put to work all of the research that has been undertaken in past decades on the elicitation and structural estimation of subjective belief distributions as well as preferences over atemporal risk, patience, and intertemporal risk. As contributors to elements of that research in laboratories and the field, we drew together those methods and applied them to an online, incentivized experiment in the United States. We have two major findings. First, the atemporal risk premium during the COVID-19 pandemic appeared to change significantly compared to before the pandemic, consistent with theoretical results of the effect of increased background risk on foreground risk attitudes. Second, subjective beliefs about the cumulative level of deaths evolved dramatically over the period between May and November 2020, a volatile one in terms of the background evolution of the pandemic.
We convey our experiences developing and implementing an online experiment to elicit subjective beliefs and economic preferences. The COVID-19 pandemic and associated closures of our laboratories required us to conduct an online experiment in order to collect beliefs and preferences associated with the pandemic in a timely manner. Since we had not previously conducted a similar multi-wave online experiment, we faced design and implementation considerations that are not present when running a typical laboratory experiment. By discussing these details more fully, we hope to contribute to the online experiment methodology literature at a time when many other researchers may be considering conducting an online experiment for the first time. We focus primarily on methodology; in a complementary study we focus on initial research findings.
OBJECTIVES/GOALS: To identify an electroencephalographic (EEG) signature of sensory over-responsivity (SOR) in adults with Tourette syndrome (TS). METHODS/STUDY POPULATION: We will recruit 60 adults with chronic tic disorder (CTD) and 60 sex- and age-matched healthy controls to complete scales assessing severity of SOR (Sensory Gating Inventory, SGI), tics, and psychiatric symptoms. Subjects will then be monitored on dense-array scalp EEG during sequential auditory and tactile sensory gating paradigms, as such paradigms have been shown to correlate with self-report measures of SOR in other populations. Single-trial EEG data will be segmented into 100-ms epochs and spectrally deconvoluted into standard frequency bands (delta, theta, alpha, beta, gamma) for pre-defined regions of interest. We will conduct between-group contrasts (Wilcoxon rank-sum) of band-specific sensory gating indices and within-group correlations (Spearman rank correlations) between sensory gating indices and SGI scores. RESULTS/ANTICIPATED RESULTS: We hypothesize that, relative to controls, adults with CTD exhibit impaired sensory gating and that the extent of impairment correlates with severity of SOR. To date, 14 adults with CTD (9 men, 5 women) and 16 controls (10 men, 6 women) have completed the protocol. Within this sample, adults with CTD showed significantly reduced sensory gating compared to controls in frontal (CTD median 0.12 dB, interquartile range -0.15 to 0.70 dB; control -0.37 dB, -0.80 to -0.13 dB; p = 0.01) and parietal (CTD 0.17 dB, -0.08 to 0.50 dB; control -0.20 dB, -0.43 to 0.10 dB; p = 0.01) gamma-band activity during the 100–200 ms epoch of the tactile paradigm. No significant between-group differences were evident in the auditory paradigm. Among adults with CTD, multiple sensory gating indices correlated significantly with SGI scores. Enrollment continues. DISCUSSION/SIGNIFICANCE: Results aim to clarify the extent of sensory gating impairment in TS and to identify a clinical correlate of neurophysiologic dysfunction in the disorder. Such knowledge has direct implications for the identification of candidate neurophysiologic biomarkers, an express goal of the National Institutes of Health.
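A brief sketch of the planned band-specific contrasts and correlations is given below. The gating-index definition (second-stimulus power relative to first-stimulus power, in dB), the simulated data, and all variable names are assumptions for illustration only, not the study's analysis code.

```python
# Illustrative sketch of the planned statistics: Wilcoxon rank-sum contrast of a
# band-specific sensory gating index between groups, and a Spearman correlation
# between the index and SGI scores within the CTD group. All data are simulated.
import numpy as np
from scipy.stats import ranksums, spearmanr

rng = np.random.default_rng(0)

def gating_index_db(s1_power, s2_power):
    """Sensory gating index: second-stimulus power relative to the first, in dB (assumed definition)."""
    return 10.0 * np.log10(s2_power / s1_power)

# Simulated per-subject band power for one region, band, and epoch
ctd_idx = gating_index_db(rng.gamma(2.0, 1.0, 14), rng.gamma(2.2, 1.0, 14))
hc_idx = gating_index_db(rng.gamma(2.0, 1.0, 16), rng.gamma(1.8, 1.0, 16))

stat, p_between = ranksums(ctd_idx, hc_idx)      # between-group contrast
sgi_scores = rng.integers(20, 120, size=14)      # placeholder SGI totals for the CTD group
rho, p_corr = spearmanr(ctd_idx, sgi_scores)     # within-group correlation

print(f"Wilcoxon rank-sum p = {p_between:.3f}; Spearman rho = {rho:.2f} (p = {p_corr:.3f})")
```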
School counselling has the potential to deliver significant support for the wellbeing of children. However, much of the research on school counsellors has been conducted in developed Western countries, with very limited research into factors influencing the effectiveness of counsellors in lower middle-income countries or in Asia. The aim of this qualitative study was to investigate the perceptions of Filipino counsellors about their roles, and factors that supported or impeded their effectiveness. Seventeen school counsellors in the Philippines were interviewed, and the data were analysed thematically. Our findings suggest that Filipino school counsellors often carry out dual roles, experience a lack of role clarity, and are systemically disempowered in their schools. Relationships with school principals have a significant influence on counsellors’ roles and positioning in schools, and therefore on their effectiveness. The ability of principals to foster a school ethos supportive of counselling is essential in enabling counsellors to leverage the multifunctional nature of their work, become embedded and centrally positioned in the school community, and enhance their effectiveness. Doing so can enable counselling to be more culturally accessible to young people.
This article presents an empirical study of six grievance mechanisms in multi-stakeholder initiatives (MSIs). It argues that key characteristics of each grievance mechanism as well as the contexts in which they operate significantly affect human rights outcomes. However, even the most successful mechanisms only manage to produce remedies in particular types of cases and contexts. The research also finds that it is prohibitively difficult to determine whether ‘effective’ remedy has been achieved in individual cases. Furthermore, the key intervention by the UN Guiding Principles on Business and Human Rights (UNGPs), to prescribe a set of effectiveness criteria for designing or revising MSI grievance mechanisms, itself appears ineffective in stimulating better outcomes for rights-holders. Drawing on these findings, the article reflects on the future potential and limitations of MSI grievance mechanisms within broader struggles to ensure business respect for human rights.
In 2017 the Scottish Government passed the Child Poverty (Scotland) Act with a commitment to reduce the relative child poverty rate significantly, from the currently prevailing level of around 25% to 10% by 2030/31. In response, the government introduced the Scottish Child Payment (SCP), which provides a direct transfer to households at a fixed rate per eligible child – currently £25 per week. In this paper we explore, using a micro-to-macro modelling approach, the effectiveness of using the SCP to achieve the Scottish child poverty targets. While we find that the ambitious child poverty targets can technically be met solely through the SCP, the necessary payment of £165 per week, amounting to a total government cost of £3 billion per year, makes the political and economy-wide barriers significant. A key issue with relying solely on the SCP is the non-linearity of the response to the payment: as the payment increases, the marginal reduction in child poverty decreases – this is particularly evident for payments above £80 per week. A ‘policy-mix’ option combining the SCP, targeted cash transfers and other policy levers (such as childcare provision) seems the most promising approach to reaching the child poverty targets.
This paper outlines frameworks for reserving validation and gives the reader an overview of the techniques currently employed. In the experience of the authors, many companies lack an embedded reserve validation framework, and reserve validation can appear piecemeal and unstructured. The paper outlines a case study demonstrating the potential of machine learning techniques, and then discusses the implications of machine learning for the future of reserving departments, processes, data and validation techniques. Reserving validation can take many forms, from simple checks to full independent reviews, and serves to add value to the reserving process, enhance governance and increase confidence in and the reliability of results. The paper covers common weaknesses and their solutions, and suggests a framework in which to apply validation tools. The impacts of the COVID-19 pandemic on reserving validation are also covered, as are early warning indicators and the topic of IFRS 17 from the standpoint of reserving validation. The paper looks at the future of reserving validation and discusses the data challenges that need to be overcome on the path to embedded reserving process validation.
Background: Infections are a frequent cause of hospital (re)admissions for older adults receiving home health care (HHC) in the United States. However, previous investigators have likely underestimated the prevalence of infections leading to hospitalization due to the limitations of identifying infections using the Outcome and Assessment Information Set (OASIS), the standardized assessment tool mandated for all Medicare-certified HHC agencies. By linking OASIS data with inpatient data from the Medicare Provider Analysis and Review (MedPAR) file, we were able to better quantify infection hospitalization trends and subsequent mortality among HHC patients. Method: After stratification (by census region, ownership, and urban or rural location) and random sampling, our data set consisted of 2,258,113 Medicare beneficiaries who received HHC services between January 1, 2013, and December 31, 2018, from 1,481 Medicare-certified HHC agencies. The 60-day HHC episodes were identified in OASIS. Hospital transfers reported in OASIS were linked with corresponding MedPAR records. Our outcomes of interest were (1) hospitalization with infection present on admission (POA); (2) hospitalization with infection as the primary cause; and (3) 30-day mortality following hospitalization with infection as the primary cause. We identified bacterial (including suspected) infections based on International Classification of Diseases, Ninth Revision (ICD-9) and ICD-10 codes in MedPAR. We classified infections by site: respiratory, urinary tract, skin/soft tissue, intravenous catheter-related, and all (including other or unspecified infection site). We also identified sepsis diagnoses. Result: From 2013 through 2018, the percentage of 60-day HHC episodes with 1 or more hospital transfers ranged from 15% to 16%. Approximately half of all HHC patients hospitalized had an infection POA. Over the 6 years studied, infection (any type) was the primary cause of hospitalization in more than a quarter of all transfers (25.86%–27.57%). The percentage of hospitalizations due to sepsis increased from 7.51% in 2013 to 11.49% in 2018, whereas the percentage of hospitalizations due to respiratory, urinary tract, or skin/soft-tissue infections decreased (p < 0.001). Thirty-day mortality following a transfer due to infection ranged from 14.14% in 2013 to 14.98% in 2018; mortality rates were highest following transfers caused by sepsis (23.14%–26.51%) and respiratory infections (13.07%–14.27%). Conclusion: HHC is an important source of post-acute care for those aging in place. Our findings demonstrate that infections are a persistent problem in HHC and are associated with substantial 30-day mortality, particularly following hospitalizations caused by sepsis, emphasizing the importance of infection prevention in HHC. Effective policies to promote best practices for infection prevention and control in the home environment are needed to mitigate infection risk.
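As a rough illustration of the linkage step described above, the sketch below matches OASIS-reported hospital transfers to MedPAR inpatient stays by beneficiary ID and admission date, and flags stays whose diagnosis codes fall in a small, purely illustrative set of infection ICD-10 prefixes. The prefixes, field names, and example records are assumptions, not the study's actual coding scheme.

```python
# Schematic sketch (not the study's code) of the OASIS-to-MedPAR linkage step:
# hospital transfers reported in OASIS are matched to MedPAR inpatient stays by
# beneficiary ID and admission date, and each stay is flagged if any diagnosis
# code starts with one of a purely illustrative set of infection ICD-10 prefixes.
import pandas as pd

INFECTION_PREFIXES = ("A41", "J18", "N39.0", "L03")  # sepsis, pneumonia, UTI, cellulitis (examples only)

def has_infection_code(diagnosis_codes):
    """True if any diagnosis code starts with an illustrative infection prefix."""
    return any(code.startswith(INFECTION_PREFIXES) for code in diagnosis_codes)

oasis_transfers = pd.DataFrame({
    "bene_id": [1, 2],
    "transfer_date": pd.to_datetime(["2018-03-02", "2018-06-10"]),
})
medpar_stays = pd.DataFrame({
    "bene_id": [1, 2],
    "admsn_date": pd.to_datetime(["2018-03-02", "2018-06-10"]),
    "dgns_codes": [["A41.9", "I10"], ["I50.9"]],
})

linked = oasis_transfers.merge(
    medpar_stays, left_on=["bene_id", "transfer_date"], right_on=["bene_id", "admsn_date"])
linked["infection_dx"] = linked["dgns_codes"].apply(has_infection_code)
print(linked[["bene_id", "infection_dx"]])
```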
The Patient Health Questionnaire-9 (PHQ-9) is a widely used measure of depression in primary care. It was, however, originally designed as a diagnostic screening tool, and not for measuring change in response to antidepressant treatment. Although the Quick Inventory of Depressive Symptomatology (QIDS-SR-16) has been extensively validated for outcome measurement, it is poorly adopted in UK primary care, and, although free for clinicians, has licensing restrictions for healthcare organisation use.
Aims
We aimed to develop a modified version of the PHQ-9, the Maudsley Modified PHQ-9 (MM-PHQ-9), for tracking symptom changes in primary care. We tested the measure's validity, reliability and factor structure.
Method
A sample of 121 participants was recruited across three studies, and comprised 78 participants with major depressive disorder and 43 controls. MM-PHQ-9 scores were compared with the QIDS-SR-16 and Clinical Global Impressions improvement scale, for concurrent validity. Internal consistency of the scale was assessed, and principal component analysis was conducted to determine the items’ factor structure.
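As a rough sketch of the internal-consistency and factor-structure analyses described above, the snippet below computes Cronbach's alpha and a principal component decomposition for a simulated 9-item score matrix. The data and function names are illustrative assumptions, not the study's analysis code.

```python
# Illustrative sketch of the internal-consistency and factor-structure analyses
# described in the Method: Cronbach's alpha and a principal component
# decomposition of the item correlation matrix. The 9-item score matrix is
# simulated; nothing here is the study's actual analysis code.
import numpy as np

rng = np.random.default_rng(1)
items = rng.integers(0, 4, size=(121, 9)).astype(float)  # 121 respondents x 9 items scored 0-3

def cronbach_alpha(x):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = x.shape[1]
    return (k / (k - 1)) * (1 - x.var(axis=0, ddof=1).sum() / x.sum(axis=1).var(ddof=1))

def principal_components(x, n_components=2):
    """Eigendecomposition of the item correlation matrix, largest components first."""
    corr = np.corrcoef(x, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)
    order = np.argsort(eigvals)[::-1][:n_components]
    return eigvals[order], eigvecs[:, order]

print("Cronbach's alpha:", round(cronbach_alpha(items), 3))
eigvals, loadings = principal_components(items)
print("Leading eigenvalues:", np.round(eigvals, 2))
```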
Results
The MM-PHQ-9 demonstrated good concurrent validity with the QIDS-SR-16, and excellent internal consistency. Sensitivity to change over a 14-week period was d = 0.41 compared with d = 0.61 on the QIDS-SR-16. Concurrent validity between the paper and mobile app versions of the MM-PHQ-9 was r = 0.67.
Conclusions
These results indicate that the MM-PHQ-9 is a valid and reliable measure of depressive symptoms in paper and mobile app format, although further validation is required. The measure was sensitive to change, demonstrating suitability for use in routine outcome assessment.
This article explores different approaches to assessing the effectiveness of non-state-based non-judicial grievance mechanisms (NSBGMs) in achieving access to remedy for rightsholders. It queries the approach that has been widely adopted as a result of the United Nations Guiding Principles on Business and Human Rights (UNGPs), which focuses on the procedural aspects of grievance mechanisms. Rather, it stresses the importance of analysing the outcomes of cases for rightsholders. This article tests this hypothesis by undertaking comprehensive empirical research into the complaint mechanism of the Roundtable on Sustainable Palm Oil (RSPO). RSPO is found to perform well when judged according to the UNGPs’ effectiveness criteria. However, it performs poorly when individual cases are assessed to ascertain the outcomes that are achieved for rightsholders. The article therefore argues for the importance of equivalent scrutiny of outcomes in relation to other NSBGMs and provides an approach and accompanying methodology that can be utilized for that purpose.
Most techniques for pollen-based quantitative climate reconstruction use modern assemblages as a reference data set. We examine the implication of methodological choices in the selection and treatment of the reference data set for climate reconstructions using Weighted Averaging Partial Least Squares (WA-PLS) regression and records of the last glacial period from Europe. We show that the training data set used is important because it determines the climate space sampled. The range and continuity of sampling along the climate gradient is more important than sampling density. Reconstruction uncertainties are generally reduced when more taxa are included, but combining related taxa that are poorly sampled in the data set to a higher taxonomic level provides more stable reconstructions. Excluding taxa that are climatically insensitive, or systematically overrepresented in fossil pollen assemblages because of known biases in pollen production or transport, makes no significant difference to the reconstructions. However, the exclusion of taxa overrepresented because of preservation issues does produce an improvement. These findings are relevant not only for WA-PLS reconstructions but also for similar approaches using modern assemblage reference data. There is no universal solution to these issues, but we propose a number of checks to evaluate the robustness of pollen-based reconstructions.
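As a minimal illustration of the calibration logic underlying such reconstructions, the sketch below implements plain weighted averaging (WA), the simplest special case of WA-PLS. The taxa, abundances, and climate values are invented, and a real WA-PLS reconstruction would add the partial least squares components and cross-validated error estimates discussed in the text.

```python
# Minimal weighted-averaging (WA) sketch of pollen-based climate reconstruction.
# Step 1: estimate each taxon's climatic "optimum" as the abundance-weighted mean
# of climate across the modern training set. Step 2: reconstruct climate for a
# fossil sample as the abundance-weighted mean of those optima. Data are invented.
import numpy as np

# Modern training set: pollen percentages (rows = sites, cols = taxa) and site climate
modern_pollen = np.array([[40.0, 35.0, 25.0],
                          [10.0, 30.0, 60.0],
                          [70.0, 20.0, 10.0]])
modern_temp = np.array([8.0, 14.0, 4.0])  # e.g. mean July temperature, degrees C

# Taxon optima: abundance-weighted mean climate per taxon
optima = (modern_pollen * modern_temp[:, None]).sum(axis=0) / modern_pollen.sum(axis=0)

# Reconstruction for a fossil assemblage: abundance-weighted mean of the optima
fossil_sample = np.array([55.0, 30.0, 15.0])
reconstruction = (fossil_sample * optima).sum() / fossil_sample.sum()
print("Reconstructed temperature:", round(reconstruction, 2), "degrees C")
```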
Residual strain in electrodeposited Li films may affect safety and performance in Li metal battery anodes, so it is important to understand how to detect residual strain in electrodeposited Li and the conditions under which it arises. To explore this, Li films electrodeposited onto Cu metal substrates were prepared under an applied pressure of either 10 or 1000 kPa and subsequently tested for the presence or absence of residual strain via sin²(ψ) analysis. X-ray diffraction (XRD) analysis of Li films required preparation and examination within an inert environment; hence, a Be-dome sample holder was employed during XRD characterization. Results show that the Li film grown under 1000 kPa displayed detectable in-plane compressive strain (−0.066%), whereas the Li film grown under 10 kPa displayed no detectable in-plane strain. The underlying Cu substrate revealed an in-plane residual strain near zero. Texture analysis via pole figure determination was also performed for both Li and Cu and revealed a mild fiber texture in the Li metal and a strong bi-axial texture in the Cu substrate. Experimental details concerning the preparation, alignment, and analysis of the particularly air-sensitive Li films are also described. This work shows that Li metal exhibits residual strain when electrodeposited under compressive stress and that XRD can be used to quantify that strain.
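A minimal sketch of the sin²(ψ) analysis mentioned above is shown below: lattice strain computed from d-spacings measured at several tilt angles ψ is regressed against sin²ψ, and the slope indicates the in-plane strain/stress. The d-spacings, reference spacing, and elastic constants are illustrative values, not the measured data.

```python
# Minimal sin^2(psi) sketch: lattice strain from d-spacings at several tilt
# angles psi is fitted linearly against sin^2(psi); a non-zero slope indicates
# in-plane residual strain. All numbers below are illustrative placeholders.
import numpy as np

psi_deg = np.array([0.0, 15.0, 30.0, 45.0])          # tilt angles (degrees)
d_psi = np.array([2.4840, 2.4838, 2.4833, 2.4827])   # measured d-spacings (angstrom)
d0 = 2.4840                                           # unstrained reference spacing

strain = (d_psi - d0) / d0                            # lattice strain at each tilt
x = np.sin(np.radians(psi_deg)) ** 2
slope, intercept = np.polyfit(x, strain, 1)           # linear fit of strain vs sin^2(psi)

# For an equi-biaxial in-plane stress state, slope = (1 + nu)/E * sigma_inplane
E, nu = 4.9e9, 0.36                                   # illustrative elastic constants for Li (Pa, unitless)
sigma_inplane = slope * E / (1 + nu)
print(f"slope = {slope:.2e}, inferred in-plane stress = {sigma_inplane / 1e6:.2f} MPa")
```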
In 1817–21, the Indian subcontinent was ravaged by a series of epidemics which marked the beginning of what has since become known as the First Cholera Pandemic. Despite their far-reaching consequences, these epidemics have received remarkably little attention and have never been considered as historical subjects in their own right. This article examines the epidemics of 1817–21 in greater detail and assesses their significance for the social and political history of the Indian subcontinent. Additionally, it examines the meanings that were attached to the epidemics in the years running up to the first appearance of cholera in the West. In so doing, the article makes comparisons between responses to cholera in India and in other contexts, and tests the applicability of concepts used in the study of epidemics in the West. It is argued that the official reaction to cholera in India was initially ameliorative, in keeping with the East India Company's response to famines and other supposedly natural disasters. However, this view was gradually supplemented and replaced by a view of cholera as a social disease, requiring preventive action. These views were initially rejected in Britain, but found favour after cholera epidemics in 1831–32. Secondly, in contrast to later epidemics, it is argued that those of 1817–21 did little to exacerbate tensions between rulers and the ruled. On the rare occasions when cholera did elicit a violent reaction, it tended to be intra-communal rather than anti-colonial in nature.
Rapeseed is a popular cover crop choice due to its deep-growing taproot, which creates soil macropores and increases water infiltration. However, Brassicaceae spp. that are mature or at later growth stages can be troublesome to control. Experiments were conducted in Delaware and Virginia to evaluate herbicides for terminating rapeseed cover crops. Two separate experiments, adjacent to each other, were established to evaluate rapeseed termination by 14 herbicide treatments at two timings. Termination timings included an early and a late termination to simulate rapeseed termination prior to planting corn and soybean, respectively, for the region. At three locations where rapeseed height averaged 12 cm at early termination and 52 cm at late termination, glyphosate + 2,4-D was most effective, controlling rapeseed 96% 28 d after early termination (DAET). Paraquat + atrazine + mesotrione (92%), glyphosate + saflufenacil (91%), glyphosate + dicamba (91%), and glyphosate (86%) all provided at least 80% control 28 DAET. Rapeseed biomass followed a similar trend. Paraquat + 2,4-D (85%), glyphosate + 2,4-D (82%), and paraquat + atrazine + mesotrione (81%) were the only treatments that provided at least 80% control 28 d after late termination (DALT). Herbicide efficacy was lower at Painter in 2017, where rapeseed height was 41 cm at early termination and 107 cm at late termination. No herbicide treatment controlled rapeseed >80% 28 DAET or 28 DALT at this location. Herbicide termination of rapeseed is best when the plants are small; termination of large rapeseed plants may require mechanical or other methods beyond herbicides.
Child welfare policy making is a highly contested area of public policy. Child abuse scandals prompt critical appraisals of parents, professionals and the child protection system, creating a tipping point for reform. One hundred and six transcripts of debates in the West Australian Parliament from August to December 2006 relating to child welfare and child deaths were analysed using qualitative content analysis. The analysis found that statistics about child deaths were conflated with other levels of childhood vulnerability, promoting themes of blame, fear, risk and individual responsibility. The key rhetorical strategy was the use of numbers to generate emotion, credibility and authority in order to frame child maltreatment narrowly as a moral crime. Rhetoric and emotion are about telling causal stories and will remain ubiquitous in social policy making. Thus, in order to guide policy debate and creation, ground their claims, and manage ambiguity and uncertainty, policy makers, researchers and practitioners working with complex social issues would do well to step into this public and political discourse and be strategic in shaping more nuanced alternative frames.
Emergency admissions to hospital are a major financial burden on health services. In one area of the United Kingdom (UK), we evaluated a predictive risk stratification tool (PRISM) designed to support primary care practitioners to identify and manage patients at high risk of admission. We assessed the costs of implementing PRISM and its impact on health services costs. At the same time as the study, but independent of it, an incentive payment (‘QOF’) was introduced to encourage primary care practitioners to identify high risk patients and manage their care.
METHODS:
We conducted a randomized stepped-wedge trial in thirty-two practices, with cluster-defined control and intervention phases, and participant-level anonymized linked outcomes. We analysed routine linked data on patient outcomes for 18 months (February 2013 – September 2014). We assigned standard unit costs in pounds sterling to the resources utilized by each patient. Cost differences between the two study phases were used in conjunction with differences in the primary outcome (emergency admissions) to undertake a cost-effectiveness analysis.
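As a schematic illustration of how unit costs and outcome differences combine in such an analysis, the sketch below computes a per-patient cost difference and an incremental cost-effectiveness ratio from placeholder figures. The unit costs and resource-use values are invented, not trial data.

```python
# Schematic cost-effectiveness sketch: per-patient costs are built from standard
# unit costs, phase-level cost differences are combined with the difference in
# the primary outcome (emergency admissions) to give an incremental
# cost-effectiveness ratio (ICER). All figures are placeholders.
unit_costs_gbp = {                      # illustrative standard unit costs per contact
    "emergency_admission": 1500.0,
    "ed_attendance": 120.0,
    "gp_contact": 35.0,
}

def per_patient_cost(resource_use):
    """Total cost for one patient-year given average counts of each resource type."""
    return sum(unit_costs_gbp[k] * n for k, n in resource_use.items())

# Placeholder phase-level averages (per patient per year)
mean_cost_intervention = per_patient_cost({"emergency_admission": 0.30, "ed_attendance": 0.9, "gp_contact": 6.2})
mean_cost_control = per_patient_cost({"emergency_admission": 0.27, "ed_attendance": 0.8, "gp_contact": 5.9})
admissions_intervention, admissions_control = 0.30, 0.27

delta_cost = mean_cost_intervention - mean_cost_control
delta_admissions = admissions_intervention - admissions_control
icer = delta_cost / delta_admissions    # cost per additional (or avoided) admission
print(f"delta cost = GBP{delta_cost:.2f} per patient-year; ICER = GBP{icer:.0f} per admission")
```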
RESULTS:
We included outcomes for 230,099 registered patients. We estimated a PRISM implementation cost of GBP0.12 per patient per year.
Costs of emergency department attendances, outpatient visits, emergency and elective admissions to hospital, and general practice activity were higher per patient per year in the intervention phase than in the control phase (adjusted δ = GBP76; 95% confidence interval GBP46 to GBP106), an effect that was consistent and generally increased with risk level.
CONCLUSIONS:
Despite low reported use of PRISM, it was associated with increased healthcare expenditure. This effect was unexpected and in the opposite direction to that intended. We cannot disentangle the effects of introducing the PRISM tool from those of imposing the QOF targets; however, since across the UK predictive risk stratification tools for emergency admissions have been introduced alongside incentives to focus on patients at risk, we believe that our findings are generalizable.
A predictive risk stratification tool (PRISM) to estimate a patient's risk of an emergency hospital admission in the following year was trialled in general practice in an area of the United Kingdom. PRISM's introduction coincided with a new incentive payment (‘QOF’) in the regional contract for family doctors to identify and manage the care of people at high risk of emergency hospital admission.
METHODS:
Alongside the trial, we carried out a complementary qualitative study of processes of change associated with PRISM's implementation. We aimed to describe how PRISM was understood, communicated, adopted, and used by practitioners, managers, local commissioners and policy makers. We gathered data through focus groups, interviews and questionnaires at three time points (baseline, mid-trial and end-trial). We analyzed data thematically, informed by Normalisation Process Theory (1).
RESULTS:
All groups showed high awareness of PRISM, but raised concerns about whether it could identify patients not yet known, and about whether there were sufficient community-based services to respond to care needs identified. All practices reported using PRISM to fulfil their QOF targets, but after the QOF reporting period ended, only two practices continued to use it. Family doctors said PRISM changed their awareness of patients and focused them on targeting the highest-risk patients, though they were uncertain about the potential for positive impact on this group.
CONCLUSIONS:
Though external factors supported its uptake in the short term, with a focus on the highest risk patients, PRISM did not become a sustained part of normal practice for primary care practitioners.