Accurate diagnosis of bipolar disorder (BPD) is difficult in clinical practice, with an average delay between symptom onset and diagnosis of about 7 years. A depressive episode often precedes the first manic episode, making it difficult to distinguish BPD from unipolar major depressive disorder (MDD).
Aims
We use genome-wide association analyses (GWAS) to identify differential genetic factors and to develop predictors based on polygenic risk scores (PRS) that may aid early differential diagnosis.
Method
Based on individual genotypes from case–control cohorts of BPD and MDD shared through the Psychiatric Genomics Consortium, we compile case–case–control cohorts, applying a careful quality control procedure. In a resulting cohort of 51 149 individuals (15 532 BPD patients, 12 920 MDD patients and 22 697 controls), we perform a variety of GWAS and PRS analyses.
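As an illustrative aside, a polygenic risk score is essentially a weighted sum of risk-allele dosages, with the weights taken as effect sizes from a discovery GWAS. The sketch below shows only that arithmetic in Python, with made-up effect sizes and dosages; it is not the authors' pipeline, which would additionally involve variant selection (e.g., clumping and thresholding) in dedicated PRS software.

```python
# Minimal PRS sketch: weighted sum of risk-allele dosages.
# Effect sizes, variants, and dosages below are hypothetical.
import numpy as np

# Effect sizes (log odds ratios) for M variants from a discovery GWAS.
betas = np.array([0.05, -0.02, 0.08, 0.01])        # shape (M,)

# Genotype dosages (0, 1, or 2 risk alleles) for N individuals x M variants.
dosages = np.array([[0, 1, 2, 1],
                    [1, 1, 0, 2],
                    [2, 0, 1, 0]])                  # shape (N, M)

# PRS per individual: sum over variants of dosage * effect size.
prs = dosages @ betas

# Standardise so scores are comparable across cohorts.
prs_z = (prs - prs.mean()) / prs.std()
print(prs_z)
```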
Results
Although our GWAS is not well powered to identify genome-wide significant loci, we find significant chip heritability and demonstrate the ability of the resulting PRS to distinguish BPD from MDD, including BPD cases with depressive onset (BPD-D). We replicate our PRS findings in an independent Danish cohort (iPSYCH 2015, N = 25 966). We observe strong genetic correlation between our case–case GWAS and that of case–control BPD.
Conclusions
We find that MDD and BPD, including BPD-D, are genetically distinct. Our findings support the view that controls, MDD patients and BPD patients lie primarily on a continuum of genetic risk. Future studies with larger and richer samples will likely yield a better understanding of these findings and enable the development of better genetic predictors distinguishing BPD and, importantly, BPD-D from MDD.
Information on the time spent completing cognitive testing is often collected, but such data are not typically considered when quantifying cognition in large-scale community-based surveys. We sought to evaluate the added value of timing data over and above traditional cognitive scores for the measurement of cognition in older adults.
Method:
We used data from the Longitudinal Aging Study in India-Diagnostic Assessment of Dementia (LASI-DAD) study (N = 4,091), to assess the added value of timing data over and above traditional cognitive scores, using item-specific regression models for 36 cognitive test items. Models were adjusted for age, gender, interviewer, and item score.
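To make the modelling step concrete, the sketch below fits one item-specific regression of a cognition score on response-time quintile, with Quintile 3 (median time) as the reference and adjustment for age, gender, interviewer, and item score. The data frame is a synthetic stand-in and the variable names are assumptions, not the LASI-DAD codebook.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data; real LASI-DAD variables differ.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "cognition_z": rng.normal(size=n),
    "time_quintile": rng.integers(1, 6, size=n),   # 1 = fastest, 5 = slowest
    "age": rng.integers(60, 90, size=n),
    "gender": rng.choice(["F", "M"], size=n),
    "interviewer": rng.integers(1, 11, size=n),
    "item_score": rng.integers(0, 2, size=n),
})

# Quintile 3 (median response time) as the reference category.
model = smf.ols(
    "cognition_z ~ C(time_quintile, Treatment(reference=3)) "
    "+ age + C(gender) + C(interviewer) + item_score",
    data=df,
).fit()
print(model.params.filter(like="time_quintile"))
```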
Results:
Compared to Quintile 3 (median time), taking longer to complete specific items was associated (p < 0.05) with lower cognitive performance for 67% (Quintile 5) and 28% (Quintile 4) of items. Responding quickly (Quintile 1) was associated with higher cognitive performance for 25% of simpler items (e.g., orientation for year), but with lower cognitive functioning for 63% of items requiring higher-order processing (e.g., digit span test). Results were consistent in a range of different analyses adjusting for factors including education, hearing impairment, and language of administration and in models using splines rather than quintiles.
Conclusions:
Response times from cognitive testing may contain important information on cognition not captured in traditional scoring. Incorporation of this information has the potential to improve existing estimates of cognitive functioning.
Several methods used to examine differential item functioning (DIF) in Patient-Reported Outcomes Measurement Information System (PROMIS®) measures are presented, including effect size estimation. A summary of factors that may affect DIF detection and challenges encountered in PROMIS DIF analyses, e.g., anchor item selection, is provided. An issue in PROMIS was the potential for inadequately modeled multidimensionality to result in false DIF detection. Section 1 is a presentation of the unidimensional models used by most PROMIS investigators for DIF detection, as well as their multidimensional expansions. Section 2 is an illustration that builds on previous unidimensional analyses of depression and anxiety short-forms to examine DIF detection using a multidimensional item response theory (MIRT) model. The Item Response Theory-Log-likelihood Ratio Test (IRT-LRT) method was used for a real data illustration with gender as the grouping variable. The IRT-LRT DIF detection method is a flexible approach to handle group differences in trait distributions, known as impact in the DIF literature, and was studied with both real data and in simulations to compare the performance of the IRT-LRT method within the unidimensional IRT (UIRT) and MIRT contexts. Additionally, different effect size measures were compared for the data presented in Section 2. A finding from the real data illustration was that using the IRT-LRT method within a MIRT context resulted in more flagged items as compared to using the IRT-LRT method within a UIRT context. The simulations provided some evidence that while unidimensional and multidimensional approaches were similar in terms of Type I error rates, power for DIF detection was greater for the multidimensional approach. Effect size measures presented in Section 1 and applied in Section 2 varied in terms of estimation methods, choice of density function, methods of equating, and anchor item selection. Despite these differences, there was considerable consistency in results, especially for the items showing the largest values. Future work is needed to examine DIF detection in the context of polytomous, multidimensional data. PROMIS standards included incorporation of effect size measures in determining salient DIF. Integrated methods for examining effect size measures in the context of IRT-based DIF detection procedures are still in early stages of development.
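For readers unfamiliar with the IRT-LRT approach, the core computation is a likelihood-ratio test comparing a model that constrains a studied item's parameters to be equal across groups with a model that frees them. The snippet below shows only that generic test statistic with placeholder log-likelihoods; fitting the underlying unidimensional or multidimensional IRT models requires dedicated software and is not shown.

```python
# Generic likelihood-ratio test in the spirit of IRT-LRT DIF detection.
# The log-likelihoods and degrees of freedom below are hypothetical; in
# practice they come from fitted (M)IRT models.
from scipy.stats import chi2

ll_constrained = -10234.6   # item parameters constrained equal across groups
ll_free = -10228.1          # item parameters freed for the studied item
df = 2                      # e.g., discrimination + difficulty freed

lrt = 2 * (ll_free - ll_constrained)
p_value = chi2.sf(lrt, df)
print(f"LRT = {lrt:.2f}, df = {df}, p = {p_value:.4f}")
```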
Among inpatients, peer-comparison of prescribing metrics is challenging due to variation in patient-mix and prescribing by multiple providers daily. We established risk-adjusted provider-specific antibiotic prescribing metrics to allow peer-comparisons among hospitalists.
Methods:
Using clinical and billing data from inpatient encounters discharged from the Hospital Medicine Service between January 2020 and June 2021 at four acute care hospitals, we calculated bimonthly (every two months) days of therapy (DOT) for antibiotics attributed to specific providers based on patient billing dates. Ten patient-mix characteristics, including demographics, infectious disease diagnoses, and noninfectious comorbidities, were considered as potential predictors of antibiotic prescribing. Using linear mixed models, we identified risk-adjusted models predicting the prescribing of three antibiotic groups: broad-spectrum hospital-onset (BSHO), broad-spectrum community-acquired (BSCA), and anti-methicillin-resistant Staphylococcus aureus (Anti-MRSA) antibiotics. Provider-specific observed-to-expected ratios (OERs) were calculated to describe provider-level antibiotic prescribing trends over time.
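A minimal sketch of the observed-to-expected logic, using synthetic data and hypothetical variable names: a linear mixed model with patient-mix fixed effects and a provider random intercept supplies the expected DOT, and each provider's OER is observed DOT divided by that expectation.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data; real predictors and attribution rules differ.
rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({
    "provider": rng.integers(1, 21, size=n),             # 20 hospitalists
    "dot_per_period": rng.poisson(8, size=n).astype(float),
    "sepsis": rng.integers(0, 2, size=n),
    "covid19": rng.integers(0, 2, size=n),
    "pneumonia": rng.integers(0, 2, size=n),
    "age_over_65": rng.integers(0, 2, size=n),
})

# Linear mixed model: patient-mix fixed effects, provider random intercept.
mixed = smf.mixedlm(
    "dot_per_period ~ sepsis + covid19 + pneumonia + age_over_65",
    data=df, groups=df["provider"],
).fit()

# Expected DOT from the fixed (patient-mix) part only, then OER = obs / exp.
design = pd.DataFrame(
    {"Intercept": 1.0, "sepsis": df.sepsis, "covid19": df.covid19,
     "pneumonia": df.pneumonia, "age_over_65": df.age_over_65}
)
expected = design @ mixed.fe_params
oer = df.groupby("provider").apply(
    lambda g: g["dot_per_period"].sum() / expected[g.index].sum()
)
print(oer.describe())
```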
Results:
Predictors of antibiotic prescribing varied for the three antibiotic groups across the four hospitals; commonly selected predictors included sepsis, COVID-19, pneumonia, urinary tract infection, malignancy, and age >65 years. OERs varied within each hospital, with medians of approximately 1 and 75th percentiles of approximately 1.25. The median OER demonstrated a downward trend for the Anti-MRSA group at two hospitals but remained relatively stable elsewhere. Instances of heightened antibiotic prescribing (OER >1.25) were identified in approximately 25% of the observed time-points across all four hospitals.
Conclusion:
Our findings indicate that provider-specific benchmarking among inpatient providers is achievable and could serve as a valuable tool for inpatient stewardship efforts.
The psychometric rigor of unsupervised, smartphone-based assessments and the factors that impact remote protocol engagement are critical to evaluate prior to the use of such methods in clinical contexts. We evaluated the validity of a high-frequency, smartphone-based cognitive assessment protocol, including examining convergence and divergence with standard cognitive tests, and investigating factors that may impact adherence and performance (i.e., time of day and anticipated receipt of feedback vs. no feedback).
Methods:
Cognitively unimpaired participants (N = 120, mean age = 68.8 years, 68.3% female, 87% White, mean education = 16.5 years) completed 8 consecutive days of the Mobile Monitoring of Cognitive Change (M2C2), a mobile app-based testing platform, with brief morning, afternoon, and evening sessions. Tasks included measures of working memory, processing speed, and episodic memory. Traditional neuropsychological assessments included measures from the Preclinical Alzheimer’s Cognitive Composite battery.
Results:
Findings showed overall high compliance (89.3%) across M2C2 sessions. Average compliance by time of day was 90.2% for morning sessions, 77.9% for afternoon sessions, and 84.4% for evening sessions. There was evidence of faster reaction times among participants who expected to receive performance feedback. We observed excellent convergent and divergent validity in our comparison of M2C2 tasks and traditional neuropsychological assessments.
Conclusions:
This study supports the validity and reliability of self-administered, high-frequency cognitive assessment via smartphones in older adults. Insights into factors affecting adherence, performance, and protocol implementation are discussed.
Early life stress (ELS) and a Western diet (WD) promote mood and cardiovascular disorders; however, how these risks interact in disease pathogenesis is unclear. We assessed the effects of ELS, with or without a subsequent WD, on behaviour, cardiometabolic risk factors, and cardiac function/ischaemic tolerance in male mice. Fifty-six newborn male C57BL/6J mice were randomly allocated to a control group (CON) left undisturbed before weaning, or to maternal separation (3 h/day) and early (postnatal day 17) weaning (MSEW). Mice consumed standard rodent chow (CON, n = 14; MSEW, n = 15) or WD chow (WD, n = 19; MSEW + WD, n = 19) from week 8 to 24. Fasted blood was sampled and open field and elevated plus maze (EPM) tests were undertaken at 7, 15, and 23 weeks of age, with hearts excised at 24 weeks for Langendorff perfusion (evaluating pre- and post-ischaemic function). MSEW alone transiently increased open field activity at 7 weeks; body weight and serum triglycerides at 4 and 7 weeks, respectively; and final blood glucose levels and insulin resistance at 23 weeks. WD increased insulin resistance and body weight gain, the latter potentiated by MSEW. MSEW + WD was anxiogenic, reducing EPM open arm activity vs. WD alone. Although MSEW had modest metabolic effects and did not influence cardiac function or ischaemic tolerance in lean mice, it exacerbated weight gain and anxiogenesis, and improved ischaemic tolerance, in WD-fed animals. MSEW-induced increases in body weight (obesity) in WD-fed animals, in the absence of changes in insulin resistance, may have protected the hearts of these mice.
The COVID-19 pandemic has had major direct (e.g., deaths) and indirect (e.g., social inequities) effects in the United States. While the public health response to the epidemic featured some important successes (e.g., universal masking, and rapid development and approval of vaccines and therapeutics), there were systemic failures (e.g., inadequate public health infrastructure) that overshadowed these successes. Key deficiencies in the U.S. response were shortages of personal protective equipment (PPE) and supply chain failures. Recommendations are provided for mitigating supply shortages and supply chain failures in healthcare settings in future pandemics. Some key recommendations for preventing shortages of essential components of infection control and prevention include increasing the stockpile of PPE in the U.S. Strategic National Stockpile, increased transparency of the Stockpile, invoking the Defense Production Act at an early stage, and rapid review and authorization by FDA/EPA/OSHA of non-U.S.-approved products. Recommendations are also provided for mitigating shortages of diagnostic testing, medications, and medical equipment.
Throughout the COVID-19 pandemic, many areas in the United States experienced healthcare personnel (HCP) shortages tied to a variety of factors. Infection prevention programs, in particular, faced increasing workload demands with little opportunity to delegate tasks to others without specific infectious diseases or infection control expertise. Shortages of clinicians providing inpatient care to critically ill patients during the early phase of the pandemic were multifactorial, largely attributed to increasing demands on hospitals to provide care to patients hospitalized with COVID-19, as well as to furloughs.1 HCP shortages and challenges during later surges, including the Omicron variant-associated surges, were largely attributed to HCP infections and associated work restrictions during isolation periods and the need to care for family members, particularly children, with COVID-19. Additionally, the detrimental physical and mental health impact of COVID-19 on HCP has led to attrition, which further exacerbates shortages.2 Demands increased in post-acute and long-term care (PALTC) settings, which already faced critical staffing challenges, difficulty with recruitment, and high rates of turnover. Although individual healthcare organizations and state and federal governments have taken actions to mitigate recurring shortages, additional work and innovation are needed to develop longer-term solutions to improve healthcare workforce resiliency. The critical role of those with specialized training in infection prevention, including healthcare epidemiologists, was well demonstrated in pandemic preparedness and response. The COVID-19 pandemic underscored the need to support growth in these fields.3 This commentary outlines the need to develop the US healthcare workforce in preparation for future pandemics.
Throughout history, pandemics and their aftereffects have spurred society to make substantial improvements in healthcare. After the Black Death in 14th century Europe, changes were made to elevate standards of care and nutrition that resulted in improved life expectancy.1 The 1918 influenza pandemic spurred a movement that emphasized public health surveillance and detection of future outbreaks and eventually led to the creation of the World Health Organization Global Influenza Surveillance Network.2 More recently, the COVID-19 pandemic exposed many of the pre-existing problems within the US healthcare system, which included (1) a lack of capacity to manage a large influx of contagious patients while simultaneously maintaining routine and emergency care to non-COVID patients; (2) a “just in time” supply network that led to shortages and competition among hospitals, nursing homes, and other care sites for essential supplies; and (3) longstanding inequities in the distribution of healthcare and the healthcare workforce. The decades-long shift from domestic manufacturing to a reliance on global supply chains has compounded ongoing gaps in preparedness for supplies such as personal protective equipment and ventilators. Inequities in racial and socioeconomic outcomes highlighted during the pandemic have accelerated the call to focus on diversity, equity, and inclusion (DEI) within our communities. The pandemic accelerated cooperation between government entities and the healthcare system, resulting in swift implementation of mitigation measures, new therapies and vaccinations at unprecedented speeds, despite our fragmented healthcare delivery system and political divisions. Still, widespread misinformation or disinformation and political divisions contributed to eroded trust in the public health system and prevented an even uptake of mitigation measures, vaccines and therapeutics, impeding our ability to contain the spread of the virus in this country.3 Ultimately, the lessons of COVID-19 illustrate the need to better prepare for the next pandemic. Rising microbial resistance, emerging and re-emerging pathogens, increased globalization, an aging population, and climate change are all factors that increase the likelihood of another pandemic.4
In parts of southern and western Asia, as elsewhere, the cannon once served as one of the most dramatic tools in the inventories of state executioners. The practice of ‘blowing from a gun’, by which the condemned was bound to the front of a cannon and quite literally blown to pieces, was most infamously employed in British India and the Princely States, and the vast majority of English-language scholarship focuses on these regions. However, blowing from guns was commonplace in several other contemporary states, and the British use of the practice has rarely been situated in this context. The tactic was considered especially useful in Persia and Afghanistan, where weak governance, rebellion, and rampant banditry all threatened the legitimacy of the nascent state in the nineteenth and early twentieth centuries. This article presents a history of the practice of execution by cannon in southern and western Asia, positioning it within the existing literature on public executions in the context of military and civilian justice. In doing so, the article seeks to situate the British use of the tactic within a broader regional practice, arguing that, whilst the British—following the Mughal tradition—used execution by cannon primarily in maintaining military discipline, states such as Persia and Afghanistan instead employed the practice largely in the civilian context. This article also provides a brief technical review of the practice, drawing upon numerous primary sources to examine execution by cannon within the Mughal empire, British India, Persia, and Afghanistan.
Disparities in congenital heart disease (CHD) outcomes exist across the lifespan. However, less is known about disparities for patients with CHD admitted to the neonatal ICU. We sought to identify sociodemographic disparities in neonatal ICU admissions among neonates born with cyanotic CHD.
Materials & Methods:
Annual natality files from the US National Center for Health Statistics for years 2009–2018 were obtained. For each neonate, we identified sex, birthweight, pre-term birth, presence of cyanotic CHD, and neonatal ICU admission at time of birth, as well as maternal age, race, ethnicity, comorbidities/risk factors, trimester at start of prenatal care, educational attainment, and two measures of socio-economic status (Special Supplemental Nutrition Program for Women, Infants, and Children [WIC] status and insurance type). Multivariable logistic regression models were fit to determine the association of maternal socio-economic status with neonatal ICU admission. A covariate for race/ethnicity was then added to each model to determine whether race/ethnicity attenuated the relationship between socio-economic status and neonatal ICU admission.
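A hedged sketch of this modelling strategy, with synthetic data and assumed variable names: a multivariable logistic regression of neonatal ICU admission on socio-economic indicators, refit with race/ethnicity added to check whether the socio-economic association is attenuated.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the natality variables; names are assumptions.
rng = np.random.default_rng(2)
n = 2000
df = pd.DataFrame({
    "nicu_admission": rng.integers(0, 2, size=n),
    "wic": rng.integers(0, 2, size=n),
    "medicaid": rng.integers(0, 2, size=n),
    "race_ethnicity": rng.choice(
        ["NH_White", "NH_Black", "Hispanic", "Other"], size=n),
    "maternal_age": rng.integers(18, 45, size=n),
    "preterm": rng.integers(0, 2, size=n),
})

# Base model: socio-economic indicators plus covariates.
base = smf.logit(
    "nicu_admission ~ wic + medicaid + maternal_age + preterm", data=df
).fit(disp=False)

# Add race/ethnicity to see whether the WIC association is attenuated.
full = smf.logit(
    "nicu_admission ~ wic + medicaid + maternal_age + preterm "
    "+ C(race_ethnicity, Treatment('NH_White'))", data=df
).fit(disp=False)

print(np.exp(base.params["wic"]), np.exp(full.params["wic"]))  # aORs for WIC
```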
Results:
Of 22,373 neonates born with cyanotic CHD, 77.2% had a neonatal ICU admission. Receipt of WIC benefits was associated with higher odds of neonatal ICU admission (adjusted odds ratio [aOR] 1.20, 95% CI 1.1–1.29, p < 0.01). Neonates born to non-Hispanic Black mothers had increased odds of neonatal ICU admission (aOR 1.20, 95% CI 1.07–1.35, p < 0.01), whereas neonates born to Hispanic mothers were at lower odds of neonatal ICU admission (aOR 0.84, 95% CI 0.76–0.93, p < 0.01).
Conclusion:
Maternal Black race and low socio-economic status are associated with increased risk of neonatal ICU admission for neonates born with cyanotic CHD. Further work is needed to identify the underlying causes of these disparities.
OBJECTIVES/GOALS: Negative emotions (NEs) play a pivotal role in addiction-related processes, including tobacco lapse during a quit attempt. Some NEs (e.g., shame, guilt) are posited to lead to a spiraling effect, whereby lapse predicts increased NEs, leading to further lapse. The goal of this study is to examine associations between NEs and lapse. METHODS/STUDY POPULATION: This study examined associations between tobacco lapse and 13 distinct NEs among people who use tobacco and are trying to quit, drawing on two tobacco cessation studies. In Study 1, 220 adult (ages 18-74) cigarette users who identified as Black (50% female) participated in a 14-day study in which ecological momentary assessment (with assessments approximately every 4 hours) was used to assess emotions and lapse in real-time and real-world settings. In Study 2, 288 adult (ages 18-71) cigarette users of low socioeconomic status (51% White, 14% Black, 10% Hispanic, 49% female) participated in a 14-day study with the same protocol as Study 1. Between-person and lagged within-person associations testing links between distinct NEs and lapse were examined with multilevel models with logistic links for binary outcomes. RESULTS/ANTICIPATED RESULTS: Results from Study 1 suggested that at the between-person level, disgust (OR = 1.22, CI: 1.05, 1.42), nervousness (OR = 1.23, CI: 1.05, 1.43), guilt (OR = 1.40, CI: 1.16, 1.69), and sadness (OR = 1.18, CI: 1.02, 1.36) were predictive of higher odds of lapse, and at the within-person level, shame (OR = 1.23, CI: 1.04, 1.45) was associated with higher odds of lapse. Results from Study 2 were similar and suggested that at the between-person level, disgust (OR = 1.35, CI: 1.16, 1.56) and guilt (OR = 1.88, CI: 1.07, 3.30), and at the within-person level, shame (OR = 1.31, CI: 1.10, 1.55), were associated with higher odds of lapse. DISCUSSION/SIGNIFICANCE: The present study uses real-time, real-world data to demonstrate the role of distinct NEs in momentary tobacco lapse and helps elucidate the specific NEs that hinder the ability to abstain from tobacco use during a quit attempt. Results suggest that disgust, guilt, and shame play consistent roles in predicting lapse among diverse samples of tobacco users.
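The between-person versus lagged within-person decomposition used above can be illustrated with person-mean centering. The sketch below uses synthetic EMA-style records and hypothetical column names; the resulting predictors would then enter a mixed-effects logistic model with a random intercept per person.

```python
import numpy as np
import pandas as pd

# Hypothetical EMA records: repeated prompts nested within persons.
rng = np.random.default_rng(3)
df = pd.DataFrame({
    "person_id": np.repeat(np.arange(50), 20),
    "shame": rng.integers(1, 6, size=1000),      # momentary rating, 1-5
    "lapse": rng.integers(0, 2, size=1000),      # lapse reported at prompt
})

# Decompose each NE into a between-person mean and a within-person deviation;
# lagging the deviation lets lapse be predicted from the *prior* prompt.
df["shame_between"] = df.groupby("person_id")["shame"].transform("mean")
df["shame_within"] = df["shame"] - df["shame_between"]
df["shame_within_lag"] = df.groupby("person_id")["shame_within"].shift(1)

# These columns would feed a multilevel logistic model (e.g., a mixed-effects
# GLM with a logit link) predicting lapse.
print(df[["shame_between", "shame_within_lag", "lapse"]].head())
```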
Mixed-layer clays composed of randomly interstratified kerolite/stevensite occur as lake and/or spring deposits of probable Pliocene and Pleistocene age in the Amargosa Desert of southern Nevada, U.S.A. The percentage of expandable layers of these clays, determined from computer-simulated X-ray diffractograms, ranges from almost 0 to about 80%. This range in expandabilities most likely results from differences in solution chemistry and/or temperature at the time of formation. An average structural formula for the purest clay (sample P-7), a clay with about 70% expandable layers, is:
Two rapid methods for the decomposition and chemical analysis of clays were adapted for use with 20–40-mg size samples, typical amounts of ultrafine products (<0.5-μm diameter) obtained by modern separation methods for clay minerals. The results of these methods were compared with those of “classical” rock analyses. The two methods consisted of mixed lithium metaborate fusion and heated decomposition with HF in a closed vessel. The latter technique was modified to include subsequent evaporation with concentrated H2SO4 and re-solution in HCl, which reduced the interference of the fluoride ion in the determination of Al, Fe, Ca, Mg, Na, and K. Results from the two methods agree sufficiently well with those of the “classical” techniques to minimize error in the calculation of clay mineral structural formulae. Representative maximum variations, in atoms per unit formula of the smectite type based on 22 negative charges, are 0.09 for Si, 0.03 for Al, 0.015 for Fe, 0.07 for Mg, 0.03 for Na, and 0.01 for K.
Deposits of sepiolite, trioctahedral smectite (mixed-layer kerolite/stevensite), calcite, and dolomite, found in the Amargosa Flat and Ash Meadows areas of the Amargosa Desert were formed by precipitation from nonsaline solutions. This mode of origin is indicated by crystal growth patterns, by the low Al content for the deposits, and by the absence of volcanoclastic textures. Evidence for low salinity is found in the isotopic compositions for the minerals, in the lack of abundant soluble salts in the deposits, and in the crystal habits of the dolomite. In addition, calculations show that modern spring water in the area can precipitate sepiolite, dolomite, and calcite following only minor evaporative concentration and equilibration with atmospheric CO2. However, precipitation of mixed-layer kerolite/stevensite may require a more saline environment. Mineral precipitation probably occurred during a pluvial period in shallow lakes or swamps fed by spring water from Paleozoic carbonate aquifers.
Objective:
To evaluate temporal trends in the prevalence of gram-negative bacteria (GNB) with difficult-to-treat resistance (DTR) in the southeastern United States. A secondary objective was to examine the use of novel β-lactams for GNB with DTR, by both antimicrobial use (AU) and a novel metric of AU adjusted for microbiological burden (am-AU).
Design:
Retrospective, multicenter, cohort.
Setting:
Ten hospitals in the southeastern United States.
Methods:
GNB with DTR including Enterobacterales, Pseudomonas aeruginosa, and Acinetobacter spp. from 2015 to 2020 were tracked at each institution. Cumulative AU of novel β-lactams including ceftolozane/tazobactam, ceftazidime/avibactam, meropenem/vaborbactam, imipenem/cilastatin/relebactam, and cefiderocol in days of therapy (DOT) per 1,000 patient-days was calculated. Linear regression was utilized to examine temporal trends in the prevalence of GNB with DTR and cumulative AU of novel β-lactams.
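For clarity, the AU metric and trend test reduce to simple arithmetic: DOT per 1,000 patient-days is 1,000 × DOT ÷ patient-days, with a linear regression on time for the temporal trend. The sketch below uses purely illustrative numbers.

```python
import pandas as pd
from scipy.stats import linregress

# Hypothetical yearly aggregates for one hospital; values are illustrative.
usage = pd.DataFrame({
    "year": [2015, 2016, 2017, 2018, 2019, 2020],
    "novel_bl_dot": [120, 150, 170, 210, 260, 300],   # days of therapy
    "patient_days": [90000, 91000, 92500, 93000, 88000, 95000],
})

# AU metric: DOT per 1,000 patient-days.
usage["dot_per_1000_pd"] = 1000 * usage["novel_bl_dot"] / usage["patient_days"]

# Simple linear regression for the temporal trend.
trend = linregress(usage["year"], usage["dot_per_1000_pd"])
print(f"slope = {trend.slope:.3f} DOT/1,000 PD per year, p = {trend.pvalue:.3f}")
```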
Results:
The overall prevalence of GNB with DTR was 0.85% (1,223/143,638), with a numerical increase from 0.77% to 1.00% between 2015 and 2020 (P = .06). There were statistically significant increases in DTR Enterobacterales (0.11% to 0.28%, P = .023) and DTR Acinetobacter spp. (4.2% to 18.8%, P = .002). Cumulative AU of novel β-lactams was 1.91 ± 1.95 DOT per 1,000 patient-days. When comparing cumulative mean AU and am-AU, there was an increase from 1.91 to 2.36 DOT per 1,000 patient-days, with more than half of the hospitals shifting in ranking after adjustment for microbiological burden.
Conclusions:
The overall prevalence of GNB with DTR and the use of novel β-lactams remain low. However, the uptrend in the use of novel β-lactams after adjusting for microbiological burden suggests a higher utilization relative to the prevalence of GNB with DTR.
Running off the £2 trillion of UK corporate sector defined benefit liabilities in an efficient and effective fashion is the biggest challenge facing the UK pensions industry. As more and more defined benefit pension schemes start maturing, the trustees running those schemes need to consider what their target end-state will be and the associated journey plan. However, too few trustee boards have well-articulated and robust plans. Determining the target end-state requires a grasp of various disciplines and an ability to work collaboratively with different professional advisers. This paper sets out issues trustees, employers and their advisers can consider when addressing whether their target end-state should be low-dependency, buyout or transfer to a superfund. Member outcomes analysis is introduced as a central tool through which to differentiate alternative target end-states. A five-step methodology is set out for deriving an optimal target end-state for a scheme. Also considered are the specific factors affecting stressed schemes, highlighting the importance for trustee boards of considering their Plan B should their employer or scheme ever become stressed. The paper ends with specific recommendations for the actuarial profession and The Pensions Regulator to take forward.
There are numerous challenges pertaining to epilepsy care across Ontario, including Epilepsy Monitoring Unit (EMU) bed pressures, surgical access and community supports. We sampled the current clinical, community and operational state of Ontario epilepsy centres and community epilepsy agencies following the COVID-19 pandemic. A 44-item survey was distributed to all 11 district and regional adult and paediatric Ontario epilepsy centres. Qualitative responses were collected from community epilepsy agencies. Results revealed ongoing gaps in epilepsy care across Ontario, with EMU bed pressures and labour shortages being limiting factors. A clinical network advising the Ontario Ministry of Health will improve access to epilepsy care.