As the population ages, the provision of adult long-term care (LTC) is one of the major challenges facing the UK and other developed nations. LTC funding for the elderly is complex, reflecting the range and level of services provided, with the total cost depending on the duration of LTC required. Institutional care settings (e.g., nursing/residential care homes) represent the most expensive form of LTC. Planning and funding for institutional LTC requires an understanding of the factors affecting the mortality (and hence duration and cost of care) of such LTC recipients. Using data provided by Bupa, one of the largest LTC providers in Britain, this paper investigates factors affecting the mortality of residents of institutional LTC facilities over the period 2016-2019. Consistent with existing research, most residents were female and had a higher average age profile than male residents. For residents who died during the investigation period, the average length of stay was approximately 1.6 times longer for females than for males. For both males and females, new residents experienced higher mortality in the first year post-admission than existing residents. Variations in the mortality of the residents were analysed by condition, funding status and care type on admission.
Edited by
David Mabey, London School of Hygiene and Tropical Medicine; Martin W. Weber, World Health Organization; Moffat Nyirenda, London School of Hygiene and Tropical Medicine; Dorothy Yeboah-Manu, Noguchi Memorial Institute for Medical Research, University of Ghana; Jackson Orem, Uganda Cancer Institute, Kampala; Laura Benjamin, University College London; Michael Marks, London School of Hygiene and Tropical Medicine; Nicholas A. Feasey, Liverpool School of Tropical Medicine
Varicella (chickenpox) is an acute, highly infectious disease, with outbreaks occurring in schools and emergency settings. The public health impact of herpes zoster (reactivated varicella infection) has increased in areas of Africa with high HIV prevalence.
We combine newly collected election data with records of public denials of the results of the 2020 election to estimate the degree to which election-denying Republican candidates over- or underperformed other Republicans in 2022 in statewide and federal elections. We find that the average vote share of election-denying Republicans in statewide races was approximately 3.2 percentage points lower than their co-partisans after accounting for state-level partisanship. However, we find no such underperformance on aggregate for U.S. House elections, perhaps due to the more-partisan nature of many House districts. Together, the results suggest that the types of candidates in American elections who take more-extreme positions tend to underperform, but that these performance gaps are relatively small in the present, polarized political environment.
Medical resuscitations in rugged prehospital settings require emergency personnel to perform high-risk procedures in low-resource conditions. Just-in-Time Guidance (JITG) delivered through augmented reality (AR) may be a solution, but there is little literature on the utility of AR-mediated JITG tools for facilitating the performance of emergent field care.
Study Objective:
The objective of this study was to investigate the feasibility and efficacy of a novel AR-mediated JITG tool for emergency field procedures.
Methods:
Emergency medical technician-basic (EMT-B) and paramedic cohorts were randomized to either video training (control) or JITG-AR guidance (intervention) groups for performing bag-valve-mask (BVM) ventilation, intraosseous (IO) line placement, and needle decompression (Needle-d) in a medium-fidelity simulation environment. In the intervention condition, subjects used an AR technology platform to perform the tasks. The primary outcome was participant task performance; the secondary outcome was participant-reported acceptability. Participant task score, task time, and acceptability ratings were reported descriptively and compared between the control and intervention groups using chi-square analysis for binary variables and unpaired t-testing for continuous variables.
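As a rough illustration of the comparison strategy described above, the Python sketch below applies an unpaired t-test to a continuous outcome and a chi-square test to a binary outcome; all values, group sizes, and variable names are hypothetical and are not taken from the study.

```python
# Minimal sketch of the between-group comparisons described above; all values are
# hypothetical and for illustration only.
import numpy as np
from scipy import stats

# Hypothetical task performance scores (%) for the control (video) and JITG-AR groups.
control_scores = np.array([78, 85, 90, 72, 88, 81, 76, 84])
jitg_scores = np.array([70, 74, 83, 68, 79, 77, 72, 80])

# Continuous outcome (task score): unpaired (independent-samples) t-test.
t_stat, p_continuous = stats.ttest_ind(control_scores, jitg_scores)

# Binary outcome (e.g., task completed yes/no): chi-square test on a 2x2 table.
#                  completed  not completed
table = np.array([[25, 5],    # control
                  [21, 9]])   # JITG-AR
chi2, p_binary, dof, expected = stats.chi2_contingency(table)

print(f"t-test: t = {t_stat:.2f}, p = {p_continuous:.3f}")
print(f"chi-square: chi2 = {chi2:.2f}, p = {p_binary:.3f}")
```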
Results:
Sixty participants were enrolled (mean age 34.8 years; 72% male). In the EMT-B cohort, there was no difference in average task performance score between the control and JITG groups for the BVM and IO tasks; however, the control group had higher performance scores for the Needle-d task (mean score difference 22%; P = .01). In the paramedic cohort, there was no difference in performance scores between the control and JITG groups for the BVM and Needle-d tasks, but the control group had higher task scores for the IO task (mean score difference 23%; P = .01). For all tasks and participant types, the control group performed tasks more quickly than the JITG group. There was no difference in participant usability or usefulness ratings between the JITG and control conditions for any of the tasks, although paramedics reported they were less likely to use the JITG equipment again (mean difference 1.96 rating points; P = .02).
Conclusions:
This study provides preliminary evidence that AR-mediated guidance for emergency medical procedures is feasible and acceptable. These observations, coupled with AR's promise for real-time interaction and ongoing technological advancements, suggest a potential role for this modality in training and practice that justifies further investigation.
Three authigenic muscovite morphologies are associated with Norphlet Formation stylolitization observed in the Texaco Mobile Area Block 872 #1 well: 1) large crystals of 1M muscovite, which grew in the stylolites with their c-axes parallel to the plane of maximum compressive stress; 2) fine-grained bundles of muscovite that occur as pore-fillings near stylolites; and 3) pods of fine-grained muscovite that exist within stylolite insoluble residue and that were precipitated as pore-filling muscovite before the host sandstone pressolved.
The population of large crystals of 1M muscovite grew at 51 ± 9 Ma, pore-filling muscovites precipitated at 77 ± 22 Ma, and muscovite pods have ages of 86 ± 16 Ma, as indicated by 40Ar/39Ar laser fusion. Apparent ages indicate that stylolitization was coincident with the beginning of organic maturation Zone 5 and could be the product of reservoir fluid pressure fluctuations induced by gas leakage. The lower Smackover Formation source/seal rock, acting as a pressure relief valve, could have been compromised by microfractures occurring during hydrocarbon generation and expulsion. Decreases in reservoir fluid pressure would have acted upon the sandstone framework by increasing the effective overburden pressure, thus making the rock more susceptible to pressure solution.
Stylolite frequency and quartz cement volume increase in the finer grained portion of the conventional core. Quartz cement volume correlates inversely to percent sandstone porosity. Apparent muscovite ages indicate that stylolitization occurred after hydrocarbon migration. Silica mobility was limited because pressure solution mineral products were precipitated from within grain films of irreducible water within the sandstone.
Stylolitization of quartz grains accounts for a minimum of 34% of the quartz cement in the upper cored section of the Norphlet Formation and a minimum of 17% of the quartz cement in the lower cored section. Quartz cement volumes are based on stylolite insoluble residue thickness and weight measurements of pyrobitumen within and near the insoluble residue seams. Stylolitization of K-feldspar and precipitation of muscovite can release additional silica, which may have precipitated as quartz cement.
To explore the usefulness of the Loewenstein-Acevedo Scales for Semantic Interference and Learning (LASSI-L; Crocco et al., 2013), a novel memory-based cognitive stress test capitalizing on semantic interference, in Huntington's Disease (HD).
Participants and Methods:
Twelve healthy adults (HA) and 14 individuals with manifest HD were administered the LASSI-L as part of an annual research visit with the UCSD Huntington's Disease Clinical Research Center (HDCRC). Participants in each group were well matched with regard to age and education. Individuals with manifest HD had an average MoCA score of 26, total functional capacity score of 10, and total motor score of 21, suggesting that they were in the early stages of HD. The LASSI-L examines different types of semantic interference that occur in the learning/encoding process. There are free and cued recall trials for two lists of semantically related words, with certain trials specific to different aspects of semantic interference, including proactive interference, retroactive interference, and failure to recover from proactive interference. T-tests comparing HA and HD groups were conducted for all recall trials and for the number of intrusions on each trial, to examine whether HD renders individuals more prone to semantic interference in both encoding and retrieval memory processes.
Results:
Individuals with HD recalled fewer words on average than HA across all recall trials except the initial free recall of the first word list. Individuals with HD recalled significantly fewer (∼1.5) words during the initial (t=-2.8, p=.005, Cohen’s d=2.7) and secondary (t=-2.9, p=.003, Cohen’s d=2.6) cued recall trials for the first word list. Individuals with HD also recalled significantly fewer words on the initial free recall (t=-2.9, p=.003, Cohen’s d=2.6) and cued recall trials of the second list, with the initial cued recall (t=-2.8, p=.005, Cohen’s d=3.1) sensitive to proactive semantic interference and the second cued recall (t=-3.3, p=.001, Cohen’s d=2.6) sensitive to failure to recover from proactive semantic interference. In addition, individuals with HD recalled significantly fewer (∼2.2) words than HA on delayed cued recall of the first list, a measure of retroactive semantic interference (t=-4.8, p<.001, Cohen’s d=2.4). Lastly, individuals with HD recalled fewer (∼4.1) words than HA on delayed free recall of both word lists (t=-3.5, p<.001, Cohen’s d=5.9). The groups did not differ significantly in the number of total intrusions per trial.
Conclusions:
Overall, our study supports the usefulness of the LASSI-L for neuropsychological assessment of HD in clinical and research settings. In comparison to a demographically similar group of HA, individuals with manifest HD showed significant differences in frontally mediated retrieval processes as well as semantic interference processes that affect efficient encoding of novel information.
To assess the utility of the Mini Mental State Exam (MMSE) and Montreal Cognitive Assessment (MoCA) for tracking cognitive changes in Huntington’s Disease.
Participants and Methods:
Currently, the most frequently used brief assessment of global cognitive functioning is the MMSE. Although the MMSE is helpful for distinguishing individuals without significant cognitive impairment from those with dementia, it is not particularly sensitive to more subtle cognitive deficits. The MoCA is another brief cognitive screening tool that has been shown to be more sensitive to mild impairment and may have greater usefulness in subcortical dementias because of its more extensive assessment of executive function. Although the MoCA appears to have high sensitivity and specificity in a variety of neurological populations, little is currently known about its efficacy in tracking cognitive decline in individuals with HD. We used a mixed effects model to analyze MMSE and MoCA scores collected prospectively during 5 years of follow-up for 163 patients with HD seen at one academic HDSA Center of Excellence. Baseline mean age for the HD cohort was 51.35 years, mean education 14.46 years, and mean CAG repeat length 43.95. Mean follow-up time was 3.33 years.
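For illustration, a linear mixed-effects model of longitudinal cognitive scores of the kind described above could be specified in Python with statsmodels roughly as in the sketch below; the file name and column names are assumptions made for this example, not the authors' actual code or data.

```python
# Illustrative mixed-effects model for longitudinal cognitive scores; the data file
# and variable names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# Assumed columns: patient_id, years_from_baseline, moca, age_baseline, education, gender
df = pd.read_csv("hd_cognition.csv")  # hypothetical file

# Random intercept and slope per patient; fixed effects for time and covariates.
model = smf.mixedlm(
    "moca ~ years_from_baseline + age_baseline + education + gender",
    data=df,
    groups=df["patient_id"],
    re_formula="~years_from_baseline",
)
result = model.fit()
# The coefficient on years_from_baseline estimates the mean annual rate of change.
print(result.summary())
```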
Results:
Mean MMSE and MoCA scores at baseline were 25.13 (SD=1.66) and 22.76 (SD=3.70), respectively. At baseline, age and gender were not associated with MMSE and MoCA scores, while years of education were. Neither age nor gender predicted rate of decline for the MoCA, while years of education predicted rate of decline for the MMSE. For the MMSE, each year of education predicted on average a 0.51-point higher score at enrollment; for the MoCA, each year of education predicted on average a 0.79-point higher score at enrollment. The mean rate of decline on the MMSE was 0.48 points per year (p<.001), while that on the MoCA was only 0.31 points per year (p<.001) in the first five years of observation.
Conclusions:
The MMSE and MoCA decline significantly over time in an unselected HD population. The smaller rate of decline on the MoCA may be due, in part, to the greater variability in baseline MoCA (SD=3.70) versus MMSE (SD=1.66) scores in our HD cohort. Unlike cortical dementias, such as Alzheimer’s disease (AD), where declines of 2-3 points per year have been described for the MMSE and MoCA, much lower annual rates of decline have been reported in subcortical dementias such as Parkinson’s disease. To our knowledge, this is the first report of the rate of cognitive decline on the MMSE and MoCA in HD; such information is vital for adequately preparing patients and families for future needs, as well as for planning interventional/treatment trials in HD.
The N100, an early auditory event-related potential, has been found to be altered in patients with psychosis. However, it is unclear if the N100 is a psychosis endophenotype that is also altered in the relatives of patients.
Methods
We conducted a family study using the auditory oddball paradigm to compare N100 amplitude and latency across 243 patients with psychosis, 86 unaffected relatives, and 194 controls. We then conducted a systematic review and a random-effects meta-analysis pooling our results with those of 14 previously published family studies, comparing data from a total of 999 patients, 1192 relatives, and 1253 controls to investigate the evidence for, and magnitude of, N100 differences.
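The sketch below illustrates the kind of random-effects pooling used in such a meta-analysis, implementing the DerSimonian-Laird estimator by hand in Python; the study-level effect sizes and variances shown are hypothetical placeholders, not the values analysed in this paper.

```python
# DerSimonian-Laird random-effects pooling of standardized mean differences (SMDs);
# all study-level inputs below are hypothetical.
import numpy as np

# Hypothetical per-study SMDs (patients vs. controls) and their variances.
smd = np.array([-0.55, -0.40, -0.62, -0.35, -0.48])
var = np.array([0.040, 0.055, 0.070, 0.030, 0.050])

# Fixed-effect (inverse-variance) weights and heterogeneity statistic Q.
w = 1.0 / var
theta_fe = np.sum(w * smd) / np.sum(w)
q = np.sum(w * (smd - theta_fe) ** 2)
df = len(smd) - 1

# DerSimonian-Laird estimate of between-study variance tau^2.
tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

# Random-effects weights, pooled SMD, and 95% confidence interval.
w_re = 1.0 / (var + tau2)
theta_re = np.sum(w_re * smd) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
ci_low, ci_high = theta_re - 1.96 * se_re, theta_re + 1.96 * se_re
print(f"pooled SMD = {theta_re:.2f}, 95% CI = ({ci_low:.2f}, {ci_high:.2f}), tau^2 = {tau2:.3f}")
```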
Results
In our family study, patients showed reduced N100 amplitudes and prolonged N100 latencies compared to controls, but no significant differences were found between unaffected relatives and controls. The meta-analysis revealed a significant reduction of the N100 amplitude and delay of the N100 latency in both patients with psychosis (standardized mean difference [s.m.d.] = −0.48 for N100 amplitude and s.m.d. = 0.43 for N100 latency) and their relatives (s.m.d. = −0.19 for N100 amplitude and s.m.d. = 0.33 for N100 latency). However, only the N100 latency changes in relatives remained significant when excluding studies with affected relatives.
Conclusions
N100 changes, especially prolonged N100 latencies, are present in both patients with psychosis and their relatives, making the N100 a promising endophenotype for psychosis. Such changes in the N100 may reflect changes in early auditory processing underlying the etiology of psychosis.
Clinical trial processes are unnecessarily inefficient and costly, slowing the translation of medical discoveries into treatments for people living with disease. To reduce redundancies and inefficiencies, a group of clinical trial experts developed a framework for clinical trial site readiness based on existing trial site qualifications from sponsors. The site readiness practices are encompassed within six domains: research team, infrastructure, study management, data collection and management, quality oversight, and ethics and safety. Implementation of this framework for clinical trial sites would reduce inefficiencies in trial conduct and help prepare new sites to enter the clinical trials enterprise, with the potential to improve the reach of clinical trials to underserved communities. Moreover, the framework holds benefits for trial sponsors, contract research organizations, trade associations, trial participants, and the public. For novice sites considering future trials, we provide a framework for site preparation and the engagement of stakeholders. For experienced sites, the framework can be used to assess current practices and inform and engage sponsors, staff, and participants. Details in the supplementary materials provide easy access to key regulatory documents and resources. Invited perspective articles provide greater depth from systems, DEIA (diversity, equity, inclusion, and accessibility), and decentralized trials perspectives.
To determine the association between after-hours consultations and the likelihood of antibiotic prescribing for self-limiting upper respiratory tract infections (URTIs) in primary care practices.
Design:
A cross-sectional analysis using Australian national primary-care practice data (MedicineInsight) between February 1, 2016 and January 31, 2019.
Setting:
Nationwide primary-care practices across Australia.
Participants:
Adult and pediatric patients who visited primary care practices for first-time URTIs.
Methods:
We estimated the proportion of first-time URTI episodes for which antibiotic prescribing occurred on the same day (immediate prescribing) using diagnoses and prescription records in the electronic primary-care database. Adjusted odds ratios (ORs) and 95% confidence intervals (CIs) for the likelihood of antibiotic prescribing according to the timing of the primary-care visit were calculated using generalized estimating equations.
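As a rough illustration, a generalized estimating equation of the kind described above can be specified in Python with statsmodels as in the sketch below; the data file, variable names, and clustering unit are assumptions made for this example rather than details taken from the study.

```python
# Sketch of a GEE logistic model for same-day antibiotic prescribing; the file and
# column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Assumed columns: antibiotic (0/1), weekend (0/1), holiday (0/1), age_group, sex,
# season, practice_id (the cluster identifier).
df = pd.read_csv("urti_episodes.csv")  # hypothetical file

model = smf.gee(
    "antibiotic ~ weekend + holiday + C(age_group) + sex + C(season)",
    groups="practice_id",
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
result = model.fit()

# Exponentiated coefficients give odds ratios (e.g., weekend vs. weekday).
print(np.exp(result.params))
print(np.exp(result.conf_int()))  # 95% confidence intervals on the OR scale
```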
Results:
Among 357,287 URTI episodes, antibiotics were prescribed in 172,605 episodes (48.3%). After adjusting for patients’ demographics, practice characteristics, and seasons, we detected a higher likelihood of antibiotic prescribing on weekends compared to weekdays (OR, 1.42; 95% CI, 1.39–1.45) and on national public holidays compared to nonholidays (OR, 1.23; 95% CI, 1.17–1.29). When we controlled for patient presentation and diagnosis, the association between antibiotic prescribing and after-hours consultations remained significant: weekend versus weekdays (OR, 1.37; 95% CI, 1.33–1.41) and holidays versus nonholidays (OR, 1.10; 95% CI, 1.03–1.18).
Conclusions:
Primary-care consultations on weekends and public holidays were associated with a higher likelihood of immediate antibiotic prescribing for self-limiting URTIs in primary care. This finding might be attributed to lower resourcing in after-hours health care.
Care in the community psychiatric setting involves regular monitoring of both mental and physical health. Patients with mental illness worldwide have higher rates of morbidity and earlier mortality, often due to physical disease, most commonly of metabolic or cardiovascular origin. The reasons for these findings are numerous, though a significant contributor is the underperformance of lifestyle screening and subsequent underutilisation of interventions. As standard, it is recommended that practitioners of all grades should, at each appropriate opportunity, assess their patients' current physical status and screen for lifestyle factors that increase the risk of morbidity. These include: weekly physical activity, weight/BMI, diet, smoking status and alcohol intake. Our aim was to investigate whether our Community Team was meeting both trust-set and national standards.
Methods
A list of all outpatient appointments, including all clinic types and all grades of staff, was generated for 1/11/21 to 19/11/21, giving a total of 48 appointments. A set of questions was then answered using data taken from notes available on an electronic system. This allowed analysis of the frequency of assessment for each lifestyle factor and the frequency of offered interventions, where appropriate. Further analysis was performed across all grades of staff, across outpatient appointment clinics and medication monitoring clinics, and across specific mental health disorders.
Results
Each lifestyle factor should have been checked at each appointment and interventions offered where appropriate. In each assessment, an intervention could have been offered following identification of a modifiable factor. No factor was assessed at every opportunity. Only 2 interventions (4%) were offered. Targeted Medication Monitoring Clinics (MMC) did not perform better than Outpatient Follow-up Clinics (OPA); in fact, OPA clinics offered more interventions. These findings were consistent across all grades of practitioner and all diagnoses.
Conclusion
Assessment of modifiable risk factors was not performed at each appointment, and where interventions were appropriate, they were rarely offered. This was a universal issue across the team; despite specialised clinics and high-risk disorders, physical health management was substandard. Opportunities to modify the risk of physical disease, or to improve treatment of the underlying psychiatric disorder, are therefore being missed. This is concerning, as community psychiatry often has the space, time and rapport with patients to explore these issues; furthermore, many psychiatric treatments carry an increased risk of morbidity and mortality. Consequently, the onus should be upon us to manage these risks and improve patient health through simple, short interventions and timely signposting and referrals.
To examine differences in surgical practices between salaried and fee-for-service (FFS) surgeons for two common degenerative spine conditions. Surgeons may offer different treatments for similar conditions on the basis of their compensation mechanism.
Methods:
The study assessed the practices of 63 spine surgeons across eight Canadian provinces (39 FFS and 24 salaried) who performed surgery for two lumbar conditions: stable spinal stenosis and degenerative spondylolisthesis. The study included a multicenter, ambispective review of consecutive spine surgery patients enrolled in the Canadian Spine Outcomes and Research Network registry between October 2012 and July 2018. The primary outcome was the difference in the types of procedures performed between the two groups. Secondary study variables included surgical characteristics, baseline patient factors, and patient-reported outcomes.
Results:
For stable spinal stenosis (n = 2234), salaried surgeons performed significantly fewer uninstrumented fusions (p < 0.05) than FFS surgeons. For degenerative spondylolisthesis (n = 1292), salaried surgeons performed significantly more instrumentation plus interbody fusions (p < 0.05). There were no statistical differences in patient-reported outcomes between the two groups.
Conclusions:
Surgeon compensation was associated with different approaches to stable lumbar spinal stenosis and degenerative lumbar spondylolisthesis. Salaried surgeons chose a more conservative approach to spinal stenosis and a more aggressive approach to degenerative spondylolisthesis, which highlights that remuneration is likely a minor determinant of the differences in spinal surgery practice in Canada. Further research is needed to elucidate which variables, other than patient demographics and financial incentives, influence surgical decision-making.
The book examines the financial and business structures of the counterfeiting business and considers how the internet and e-commerce present financial opportunities for counterfeiters. It explores ‘organised crime’ and criminal markets, digital technologies, and cultural values and practices.
A 13-month-old child presented from home, where he had begun choking and coughing. He had been eating pizza for dinner. When his mum turned around, she found he had opened her wallet, which had been dropped on the floor. At home, he turned blue, went floppy and became unresponsive. His mum administered five back blows, which produced a cough and some phlegm, but nothing else.
Extensive research on gender and politics indicates that women legislators are more likely to serve on committees and sponsor bills related to so-called “women's issues.” However, it remains unclear whether this empirical regularity is driven by district preferences, differences in legislator backgrounds, or because gendered political processes shape and constrain the choices available to women once they are elected. We introduce expansive new data on over 25,000 US state legislators and an empirical strategy to causally isolate the different channels that might explain these gendered differences in legislator behavior. After accounting for district preferences with a difference-in-differences design and for candidate backgrounds via campaign fundraising data, we find that women are still more likely to serve on women's issues committees, although the gender gap in bill sponsorship decreases. These results shed new light on the mechanisms that lead men and women to focus on different policy areas as legislators.
A classic question about democratic elections is how much they are able to influence politician behavior by forcing them to anticipate future reelection attempts, especially in contexts where voters are not paying close attention and are not well informed. We compile a new dataset containing roughly 780,000 bills, combined with more than 16 million roll-call voting records for roughly 6,000 legislators serving in U.S. state legislatures with term limits. Using an individual-level difference-in-differences design, we find that legislators who can no longer seek reelection sponsor fewer bills, are less productive on committees, and are absent for more floor votes, on average. Building a new dataset of roll-call votes and interest-group ratings, we find little evidence that legislators who cannot run for reelection systematically shift their ideological platforms. In sum, elections appear to influence how legislators allocate their effort in important ways even in low-salience environments but may have less influence on ideological positioning.
The first demonstration of laser action in ruby was made in 1960 by T. H. Maiman of Hughes Research Laboratories, USA. Many laboratories worldwide began the search for lasers using different materials, operating at different wavelengths. In the UK, academia, industry and the central laboratories took up the challenge from the earliest days to develop these systems for a broad range of applications. This historical review looks at the contribution the UK has made to the advancement of the technology, the development of systems and components and their exploitation over the last 60 years.
We use nationwide deed-level records on home foreclosures to examine the effects of economic distress on electoral outcomes and individual voter turnout. County-level difference-in-differences estimates show that counties that suffered larger increases in foreclosures did not punish or reward members of the incumbent president's party more than less affected counties. Linking the Ohio voter file to individual foreclosures, difference-in-differences estimates show that individuals whose homes were foreclosed on were less likely to turn out, rather than being mobilized. However, in 2016 counties more exposed to foreclosures supported Trump at substantially higher rates. Taken together, the evidence suggests that the effect of local economic distress on incumbent performance is generally close to zero and only becomes substantial in unusual circumstances.
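As a schematic of the county-level difference-in-differences approach described above, the sketch below fits a two-way fixed-effects regression in Python with county-clustered standard errors; the data file and variable names are hypothetical stand-ins for the actual foreclosure and election data.

```python
# Two-way fixed-effects (difference-in-differences style) sketch; the file and
# variable names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# Assumed columns: county_id, year, incumbent_vote_share, foreclosure_rate
df = pd.read_csv("county_panel.csv")  # hypothetical file

# County and year fixed effects absorb time-invariant county traits and common shocks.
model = smf.ols(
    "incumbent_vote_share ~ foreclosure_rate + C(county_id) + C(year)",
    data=df,
)
# Cluster standard errors at the county level.
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["county_id"]})
print(result.params["foreclosure_rate"], result.bse["foreclosure_rate"])
```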