This chapter of the handbook suggests some lessons from moral psychology for ethics and metaethics. The authors note that empirical research on a wide range of topics, including moral character, happiness and well-being, free will and moral responsibility, and moral judgment, has had a profound influence on recent philosophical theorizing about the foundations of morality. In their chapter they focus on one issue of particular importance: the reliability and trustworthiness of moral judgment. They critically assess three lines of argument that threaten to undermine epistemic confidence in our moral judgments, namely process debunking arguments, arguments from disagreement, and arguments from irrelevant influences. Though the jury is still out on how successful these arguments are, there is little question that they have potentially profound implications both for moral epistemology and philosophical methodology. Perhaps the most important lesson for ethics and metaethics to be drawn from moral psychology, then, may be that future progress in moral philosophy is likely to depend on philosophers and psychologists working together, rather than in isolation from one another.
The Personalized Advantage Index (PAI) shows promise as a method for identifying the most effective treatment for individual patients. Previous studies have demonstrated its utility in retrospective evaluations across various settings. In this study, we explored the effect of different methodological choices in the predictive modeling underlying the PAI.
Methods
Our approach involved a two-step procedure. First, we conducted a review of prior studies utilizing the PAI, evaluating each study using the Prediction model study Risk Of Bias Assessment Tool (PROBAST). We specifically assessed whether the studies adhered to two standards of predictive modeling: refraining from using leave-one-out cross-validation (LOO CV) and preventing data leakage. Second, we examined the impact of deviating from these methodological standards in real data. We employed both a traditional approach violating these standards and an advanced approach implementing them in two large-scale datasets, PANIC-net (n = 261) and Protect-AD (n = 614).
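For readers less familiar with these two standards, the sketch below (not the authors' script, and with synthetic data in place of PANIC-net or Protect-AD) contrasts a leakage-prone setup, in which a scaler is fit on the full data set before leave-one-out cross-validation, with a pipeline-based k-fold cross-validation in which preprocessing is re-fit within each training fold; all model and feature choices here are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): contrasting a leakage-prone setup
# (scaler fit on all data, leave-one-out CV) with a pipeline-based k-fold CV
# in which preprocessing is re-fit inside each training fold only.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import LeaveOneOut, KFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_regression(n_samples=200, n_features=20, noise=10.0, random_state=0)

# Leakage-prone: scaling uses information from the held-out observations.
X_leaky = StandardScaler().fit_transform(X)
loo_scores = cross_val_score(Ridge(), X_leaky, y,
                             cv=LeaveOneOut(), scoring="neg_mean_absolute_error")

# Leakage-free: the scaler is part of the pipeline, so it is fit on the
# training folds only; k-fold CV replaces leave-one-out.
pipe = make_pipeline(StandardScaler(), Ridge())
kfold_scores = cross_val_score(pipe, X, y,
                               cv=KFold(n_splits=5, shuffle=True, random_state=0),
                               scoring="neg_mean_absolute_error")

print(f"LOO CV + leakage      MAE: {-loo_scores.mean():.2f}")
print(f"5-fold CV, no leakage MAE: {-kfold_scores.mean():.2f}")
```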
Results
The PROBAST rating revealed a substantial risk of bias across studies, primarily due to inappropriate methodological choices. Most studies did not adhere to the examined prediction modeling standards, employing LOO CV and allowing data leakage. The comparison between the traditional and the advanced approach showed that ignoring these standards could lead to systematic overestimation of the PAI's utility.
Conclusion
Our study cautions that violating standards in predictive modeling may strongly influence the evaluation of the PAI's utility, possibly leading to false positive results. To support an unbiased evaluation, crucial for potential clinical application, we provide a low-bias, openly accessible, and meticulously annotated script implementing the PAI.
Observations of glacier melt and runoff are of fundamental interest in the study of glaciers and their interactions with their environment. Considerable recent interest has developed around distributed acoustic sensing (DAS), a sensing technique that utilizes Rayleigh backscatter in fiber optic cables to measure the seismo-acoustic wavefield at high spatial and temporal resolution. Here, we present data from a month-long, 9 km DAS deployment extending through the ablation and accumulation zones on Rhonegletscher, Switzerland, during the 2020 melt season. While testing several types of machine learning (ML) models, we establish a regression problem, using the DAS data as the independent variable, to infer the glacier discharge observed at a proglacial stream gauge. We also compare two predictive models that depend only on meteorological station data. We find that the seismo-acoustic wavefield recorded by DAS can be utilized to infer proglacial discharge. Models using DAS data outperform the two models trained on meteorological data, with mean absolute errors of 0.64, 2.25 and 2.72 m³ s⁻¹, respectively. This study demonstrates the ability of in situ glacier DAS to be used for quantifying proglacial discharge and points the way to a new approach to measuring glacier runoff.
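As a rough illustration of the kind of regression framed in this abstract, the sketch below trains a generic model on synthetic stand-ins for seismo-acoustic summary features and scores it with the mean absolute error reported above; the feature construction, model choice, and data are assumptions, not the study's pipeline.

```python
# Minimal sketch (synthetic data, not the study's pipeline): regressing
# hypothetical seismo-acoustic summary features onto observed discharge and
# scoring with mean absolute error, the metric quoted in the abstract.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_windows = 1000                                    # hypothetical time windows of DAS data
das_features = rng.normal(size=(n_windows, 16))     # e.g. band-limited energies per window
discharge = 10 + 2 * das_features[:, 0] + rng.normal(scale=0.5, size=n_windows)  # m^3 s^-1

X_train, X_test, y_train, y_test = train_test_split(
    das_features, discharge, test_size=0.25, shuffle=False)  # keep temporal order

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
mae = mean_absolute_error(y_test, model.predict(X_test))
print(f"MAE on held-out windows: {mae:.2f} m^3/s")
```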
Emerging multidrug-resistant organisms (MDROs), such as carbapenem-resistant Enterobacterales (CRE), can spread rapidly in a region. Facilities that care for high-acuity patients with longer stays may have a disproportionate impact on this spread.
Objective:
We assessed the impact of implementing preventive interventions, directed at a subset of facilities, on regional prevalence.
Methods:
We developed a deterministic compartmental model, parametrized using CRE and patient transfer data. The model included the community and healthcare facilities within a US state. Individuals may be either susceptible or infectious with CRE. Individuals determined to be infectious through admission screening, periodic prevalence surveys (PPSs), or interfacility communication were placed in a state of lower transmissibility if enhanced infection prevention and control (IPC) practices were in place at a facility.
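To make the compartmental structure concrete, the following toy sketch implements a single-facility analogue with susceptible, undetected-infectious, and detected-infectious (enhanced IPC, lower transmissibility) compartments; all rates and the single-facility simplification are illustrative assumptions rather than the study's multi-facility parameterization.

```python
# Toy single-facility sketch (not the study's parameterization): susceptible (S),
# undetected infectious (I), and detected infectious under enhanced IPC (D)
# compartments, where detection moves patients to a lower-transmissibility state.
import numpy as np
from scipy.integrate import solve_ivp

beta = 0.10          # transmission rate from undetected infectious patients (assumed)
ipc_factor = 0.3     # assumed relative transmissibility once enhanced IPC applies
detect = 0.05        # detection rate (screening / prevalence surveys, assumed)
clear = 0.01         # rate of clearing colonization (assumed)
N = 200.0            # facility census (held constant in this toy model)

def rhs(t, y):
    S, I, D = y
    force = (beta * I + ipc_factor * beta * D) / N   # force of infection
    return [-force * S + clear * (I + D),            # susceptibles
            force * S - (detect + clear) * I,        # undetected infectious
            detect * I - clear * D]                  # detected, enhanced IPC

sol = solve_ivp(rhs, t_span=(0, 3650), y0=[195.0, 5.0, 0.0],
                t_eval=np.linspace(0, 3650, 11))
prevalence = (sol.y[1] + sol.y[2]) / N
print(np.round(prevalence, 3))   # CRE prevalence at yearly intervals over 10 years
```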
Results:
Intervention bundles that included PPS and enhanced IPC practices at ventilator-capable skilled nursing facilities (vSNFs) and long-term acute-care hospitals (LTACHs) had the greatest impact on regional prevalence. The benefits of including targeted admission screening in acute-care hospitals, LTACHs, and vSNFs, and of improved interfacility communication, were more modest. Daily transmissions in each facility type were reduced following the implementation of interventions primarily focused on LTACHs and vSNFs.
Conclusions:
Our model suggests that interventions that include screening to limit unrecognized MDRO introduction to, or dispersal from, LTACHs and vSNFs slow regional spread. Interventions that pair detection and enhanced IPC practices within LTACHs and vSNFs may substantially reduce the regional burden.
Social connection is associated with better health, including reduced risk of dementia. Personality traits are also linked to cognitive outcomes; neuroticism is associated with increased risk of dementia. Personality traits and social connection are also associated with each other. Taken together, evidence suggests the potential impacts of neuroticism and social connection on cognitive outcomes may be linked. However, very few studies have simultaneously examined the relationships between personality, social connection and health.
Research objective:
We tested the association between neuroticism and cognitive measures while exploring the potential mediating roles of aspects of social connection (loneliness and social isolation).
Method:
We conducted a cross-sectional study with a secondary analysis of the Canadian Longitudinal Study on Aging (CLSA) Comprehensive Cohort, a sample of Canadians aged 45 to 85 years at baseline. We used only self-reported data collected at the first follow-up, between 2015 and 2018 (n = 27,765). We used structural equation modelling to assess the association between neuroticism (exposure) and six cognitive measures (Rey Auditory Verbal Learning Test immediate recall and delayed recall, Animal Fluency Test, Mental Alternation Test, Controlled Oral Word Association Test and Stroop Test interference ratio), with direct and indirect effects (through social isolation and loneliness). We included age, education and hearing in the models and stratified all analyses by sex, females (n = 14,133) and males (n = 13,632).
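As a much-simplified illustration of the mediation logic (not the study's structural equation models, which used two mediators and the CLSA data), the sketch below estimates a single indirect effect by the product-of-coefficients approach on synthetic data, with a bootstrap confidence interval; all variable names and effect sizes are assumptions.

```python
# Highly simplified, single-mediator sketch on synthetic data: the indirect
# effect of neuroticism on a cognitive score through loneliness is estimated
# as a*b and bootstrapped for a confidence interval.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 2000
neuroticism = rng.normal(size=n)
loneliness = 0.4 * neuroticism + rng.normal(size=n)                      # a path (assumed)
cognition = -0.3 * loneliness - 0.1 * neuroticism + rng.normal(size=n)   # b path + direct effect (assumed)

def indirect_effect(x, m, y):
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]                         # exposure -> mediator
    b = sm.OLS(y, sm.add_constant(np.column_stack([m, x]))).fit().params[1]   # mediator -> outcome, adjusting for exposure
    return a * b

est = indirect_effect(neuroticism, loneliness, cognition)
boot = []
for _ in range(500):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(neuroticism[idx], loneliness[idx], cognition[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect {est:.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```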
Preliminary results of the ongoing study:
We found positive, statistically significant associations between neuroticism and social isolation (p<0.05) and loneliness (p<0.05), for both males and females. We also found inverse, statistically significant associations between neuroticism and all cognitive measures (p<0.05), except the Stroop Test interference ratio. In these models, there was consistent evidence of indirect effects (through social isolation and loneliness) and, in some cases, evidence of direct effects. We found sex differences in the model results.
Conclusion:
Our findings suggest that the association between neuroticism and cognitive outcomes may be mediated by aspects of social connection and differ by sex. Understanding if and how modifiable risk factors mediate the association between personality and cognitive outcomes would help develop and target intervention strategies that improve social connection and brain health.
Obsessive-compulsive disorder (OCD) is a common and debilitating disorder that frequently begins in childhood and adolescence. Previous work (Bolton et al., 2011) has demonstrated that brief CBT (5 sessions), supplemented by therapeutic workbooks, is as effective as more traditional-length (12 sessions) therapist-delivered treatment for adolescents with OCD. However, as was typical at the time, the treatment was developed with very limited patient and public involvement (PPI) and was delivered in the context of a randomised controlled trial, which might affect translation to routine child and adolescent mental health services (CAMHS). To be able to implement such treatment within routine clinical services, it is crucial that it is acceptable to young people, their families and the clinicians delivering the treatment. The aim of this project was to improve the acceptability of the brief treatment through PPI and consultation with clinicians, and to consider issues relating to implementation. This was done through written feedback, interviews and focus groups with five adolescents and two parents, and a focus group and a half-day workshop with 12 clinicians. This led to revisions to the workbooks and materials to improve (a) acceptability, by updating the design through changes to wording, language and images, and by ensuring that they were consistent with values of equality, diversity and inclusion, and (b) usability, by clarifying, adding and removing content, and by organising the materials in new ways. We emphasise the importance of continued PPI throughout the project to maximise the translation of findings into practice.
Key learning aims
(1) To understand the issues surrounding the delivery of brief CBT to young people with OCD.
(2) To understand ways of reviewing, developing and improving the CBT materials with a range of young people, their parents, and clinicians.
(3) To understand how to consult with clinicians in relation to the implementation of the treatment.
(4) To consider how the process of this type of work can assist in the next steps of implementing a manualised intervention in routine CAMHS.
Physical pain is a common issue in people with bipolar disorder (BD). It worsens mental health and quality of life, negatively impacts treatment response, and increases the risk of suicide. Lithium, which is prescribed in BD as a mood stabilizer, has shown promising effects on pain.
Methods
This naturalistic study included 760 subjects with BD (FACE-BD cohort) divided into two groups: with and without self-reported pain (evaluated with the EQ-5D-5L questionnaire). In this sample, 176 subjects were treated with lithium salts. The objectives of the study were to determine whether patients receiving lithium reported less pain, and whether this effect was associated with the recommended mood-stabilizing blood concentration of lithium.
Results
Subjects with lithium intake were less likely to report pain (odds ratio [OR] = 0.59, 95% confidence interval [CI], 0.35–0.95; p = 0.036) after controlling for sociodemographic variables, BD type, lifetime history of psychiatric disorders, suicide attempt, personality traits, current depression and anxiety levels, sleep quality, and psychomotor activity. Subjects taking lithium were even less likely to report pain when lithium concentration in blood was ≥0.5 mmol/l (OR = 0.45, 95% CI, 0.24–0.79; p = 0.008).
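For illustration, the sketch below shows how an adjusted odds ratio of this kind is obtained from a logistic regression; the data are synthetic and the covariates are a small, assumed subset of those listed above, so the numbers will not match the FACE-BD results.

```python
# Sketch on synthetic data (not the FACE-BD cohort): an adjusted odds ratio and
# 95% CI for lithium intake vs. self-reported pain from a logistic regression
# with illustrative covariates.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 760
df = pd.DataFrame({
    "lithium": rng.integers(0, 2, n),        # 1 = treated with lithium salts (illustrative)
    "age": rng.normal(45, 12, n),            # illustrative covariate
    "depression": rng.normal(0, 1, n),       # illustrative current depression score
})
logit_p = -0.2 - 0.5 * df["lithium"] + 0.3 * df["depression"]
df["pain"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

fit = smf.logit("pain ~ lithium + age + depression", data=df).fit(disp=False)
or_lithium = np.exp(fit.params["lithium"])
ci_low, ci_high = np.exp(fit.conf_int().loc["lithium"])
print(f"adjusted OR for lithium: {or_lithium:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```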
Conclusions
This is the first naturalistic study to show lithium’s promising effect on pain in subjects suffering from BD after controlling for many confounding variables. This analgesic effect seems independent of BD severity and comorbid conditions. Randomized controlled trials are needed to confirm the analgesic effect of lithium salts and to determine whether lithium decreases pain in other vulnerable populations.
Background: The CDC’s new Public Health Strategies to Prevent the Spread of Novel and Targeted Multidrug-Resistant Organisms (MDROs) were informed by mathematical models that assessed the impact of implementing preventive strategies directed at a subset of healthcare facilities characterized as influential or highly connected based on their predicted role in the regional spread of MDROs. We developed an interactive tool to communicate mathematical modeling results and visualize the regional patient transfer network for public health departments and healthcare facilities to assist in planning and implementing prevention strategies.

Methods: An interactive RShiny application is currently hosted in the CDC network and is accessible to external partners through the Secure Access Management Services (SAMS). Patient transfer volumes (direct and indirect, that is, with up to 30 days in the community between admissions) were estimated from the CMS fee-for-service claims data from 2019. The spread of a carbapenem-resistant Enterobacterales (CRE)–like MDRO within a US state was simulated using a deterministic model with susceptible and infectious compartments in the community and healthcare facilities interconnected through patient transfers. Individuals determined to be infectious through admission screening, point-prevalence surveys (PPSs), or interfacility communication were assigned lower transmissibility if enhanced infection prevention and control practices were in place at a facility.

Results: The application consists of 4 interactive tabs. Users can visualize the statewide patient-sharing network for any US state and select territories in the first tab (Fig. 1). A feature allows users to highlight a facility of interest and display downstream or upstream facilities that received or sent transfers from the facility of interest, respectively. A second tab lists influential facilities to aid in prioritizing screening and prevention activities. A third tab lists all facilities in the state in descending order of their dispersal rate (ie, the rate at which patients are shared downstream to other facilities), which can help identify highly connected facilities. In the fourth tab, an interactive graph displays the predicted reduction of MDRO prevalence given a range of intervention scenarios (Fig. 2).

Conclusions: Our RShiny application, which can be accessed by public health partners, can assist healthcare facilities and public health departments in planning and tailoring MDRO prevention activity bundles.
Physical activity (PA) is crucial in the treatment of cardiac disease. There is a high prevalence of stress-response and affective disorders among cardiac patients, which might be negatively associated with their PA. This study aimed at investigating daily differential associations of International Classification of Diseases (ICD)-11 adjustment disorder, depression and anxiety symptoms with PA and sedentary behaviour (SB) during and right after inpatient cardiac rehabilitation.
Methods
The sample included N = 129 inpatients in cardiac rehabilitation (mean age = 62.2 years, s.d. = 11.3; 84.5% male; n = 2845 days). Adjustment disorder, depression and anxiety symptoms were measured daily during the last 7 days of rehabilitation and for 3 weeks after discharge. Moderate-to-vigorous PA (MVPA), light PA (LPA) and SB were measured with an accelerometer. Bayesian lagged multilevel regressions including all three symptoms were conducted to obtain their unique effects.
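As a simplified, frequentist stand-in for the Bayesian lagged multilevel models described above, the sketch below fits a random-intercept model of daily MVPA on the previous day's within-person symptom deviation (the "more than usual" component) and the person-level symptom mean, using synthetic data; the variable names and effect sizes are assumptions.

```python
# Simplified frequentist stand-in (the study used Bayesian lagged multilevel
# models): a random-intercept regression of daily MVPA on the lagged
# within-person symptom deviation and the between-person symptom mean.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
patients, days = 129, 22
pid = np.repeat(np.arange(patients), days)
person_mean = np.repeat(rng.normal(size=patients), days)             # between-person symptom level
symptoms = person_mean + rng.normal(size=patients * days)            # daily symptom score

df = pd.DataFrame({"pid": pid, "symptoms": symptoms})
df["sym_between"] = df.groupby("pid")["symptoms"].transform("mean")  # person mean
df["sym_within"] = df["symptoms"] - df["sym_between"]                # "more than usual" that day
df["sym_within_lag"] = df.groupby("pid")["sym_within"].shift(1)      # previous-day deviation
df["mvpa"] = (30 - 2 * df["sym_within_lag"].fillna(0) - 3 * df["sym_between"]
              + np.repeat(rng.normal(scale=5, size=patients), days)
              + rng.normal(scale=3, size=len(df)))                   # synthetic daily MVPA (minutes)
df = df.dropna()

model = smf.mixedlm("mvpa ~ sym_within_lag + sym_between", data=df, groups=df["pid"]).fit()
print(model.params)   # fixed effects for the lagged within-person and between-person terms
```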
Results
On days with higher adjustment disorder symptoms than usual, patients engaged in less MVPA, and more SB. Patients with overall higher depression symptoms engaged in less MVPA, less LPA and more SB. On days with higher depression symptoms than usual, there was less MVPA and LPA, and more SB. Patients with higher anxiety symptoms engaged in more LPA and less SB.
Conclusions
Results highlight the necessity to screen for and treat adjustment disorder and depression symptoms during cardiac rehabilitation.
Background: Multidrug-resistant organisms (MDROs), such as carbapenem-resistant Enterobacterales (CRE), can spread rapidly in a region. Facilities that care for high-acuity patients with long average lengths of stay (eg, long-term acute-care hospitals or LTACHs and ventilator-capable skilled nursing facilities or vSNFs) may amplify this spread. We assessed the impact of interventions on CRE spread within a region individually, bundled, and implemented at different facility types.

Methods: We developed a deterministic compartmental model, parametrized using CRE data reported to the NHSN and patient transfer data from the CMS specific to a US state. The model includes the community and the healthcare facilities within the state. Individuals may be either susceptible or infected and infectious. Infected patients determined to have CRE through admission screening or point-prevalence surveys at a facility are placed in a state of lower transmissibility if enhanced infection prevention and control (IPC) practices are in place.

Results: Intervention bundles that included periodic point-prevalence surveys and enhanced IPC at high-acuity postacute-care facilities had the greatest impact on regional prevalence 10 years into an outbreak; the benefits of including admission screening and improved interfacility communication were more modest (Fig. 1A). Delaying interventions by 3 years is predicted to result in smaller reductions in prevalence (Fig. 1B). Increasing the frequency of point-prevalence surveys from biannually to quarterly resulted in a substantial relative reduction in prevalence (from 25% to 44%) if conducted from the start of an outbreak. IPC improvements in vSNFs resulted in greater relative reductions than in LTACHs. Admission screening at LTACHs and vSNFs was predicted to have a greater impact on prevalence if in place prior to CRE introduction (~20% reduction), and the impact decreased by approximately half if implementation was delayed until 3 years after CRE introduction. In contrast, the effect of admission screening in ACHs was smaller (~10% reduction in prevalence) and did not change with implementation delays.

Conclusions: Our model suggests that interventions that use screening to limit unrecognized MDRO introduction to, or dispersal from, LTACHs and vSNFs slow regional spread. Interventions to detect colonization and improve IPC practices within LTACHs and vSNFs may substantially reduce the regional burden. Prevention strategies are predicted to have the greatest impact when interventions are bundled and implemented before an MDRO is identified in a region, but reduction in overall prevalence is still possible if implemented after initial MDRO spread.
The opioid epidemic in the United States is getting worse: in 2020 opioid overdose deaths hit an all-time high of 92,183. This underscored the need for more effective and readily available treatments for patients with opioid use disorder (OUD). Prescription digital therapeutics (PDTs) are FDA-authorized treatments delivered via mobile devices (eg, smartphones). A real-world pilot study was conducted in an outpatient addiction treatment program to evaluate patient engagement and use of a PDT for patients with OUD. The objective was to assess the ability of the PDT to improve engagement and care for patients receiving buprenorphine medication for opioid use disorder (MOUD).
Methods
Patients with OUD treated at an ambulatory addiction treatment clinic were invited to participate in the pilot. The reSET-O PDT comprises 31 core therapy lessons, 36 supplementary lessons, and contingency management rewards. Patients were asked to complete at least 4 lessons per week for 12 weeks. Engagement and use data were collected via the PDT, and emergency room visit rates were obtained from patient medical records. Data were compared to a similar group of 158 patients with OUD treated at the same clinic who did not use the PDT. Abstinence data were obtained from deidentified medical records.
Results
Pilot participants (N = 40) completed a median of 24 lessons: 73.2% completed at least 8 lessons and 42.5% completed all 31 core lessons. Pilot participants had significantly higher rates of abstinence from opioids in the 30 days prior to discharge from the program than the comparison group: 77.5% vs 51.9% (P < .01). Clinician-reported treatment retention for pilot participants vs the comparison group was 100% vs 70.9% 30 days after treatment initiation (P < .01), 87.5% vs 55.1% at 90 days post-initiation (P < .01), and 45.0% vs 38.6% at 180 days post-initiation (P = .46). Emergency room visits within 90 days of discharge from the addiction program were significantly reduced in pilot participants compared to the comparison group (17.3% vs 31.7%, P < .01).
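For context, the sketch below runs a two-proportion z-test on counts reconstructed from the reported abstinence rates (77.5% of 40 pilot participants vs 51.9% of 158 comparison patients); the abstract does not state which test the authors used, so this is an illustrative check rather than a reproduction of their analysis.

```python
# Sketch: a two-proportion z-test on counts reconstructed from the reported
# abstinence rates; the authors' actual test is not specified in the abstract.
from statsmodels.stats.proportion import proportions_ztest

abstinent = [31, 82]        # ~77.5% of 40 and ~51.9% of 158
group_sizes = [40, 158]
z, p = proportions_ztest(abstinent, group_sizes)
print(f"z = {z:.2f}, p = {p:.4f}")
```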
Conclusions
These results demonstrate substantial engagement with a PDT in a real-world population of patients with OUD being treated with buprenorphine. Abstinence and retention outcomes were high compared to patients not using the PDT. These results demonstrate the potential value of PDTs to improve outcomes among patients with OUD, a population for which a significant need for improved treatments exists.
Funding
Trinity Health Innovation and Pear Therapeutics Inc.
Objective:
To evaluate coronavirus disease 2019 (COVID-19) vaccine hesitancy among healthcare personnel (HCP) with significant clinical exposure to COVID-19 at 2 large, academic hospitals in Philadelphia, Pennsylvania.
Design, setting, and participants:
HCP were surveyed in November–December 2020 about their intention to receive the COVID-19 vaccine.
Methods:
The survey measured the intent among HCP to receive a COVID-19 vaccine, timing of vaccination, and reasons for or against vaccination. Among patient-facing HCP, multivariate regression evaluated the associations between healthcare positions (medical doctor, nurse practitioner or physician assistant, and registered nurse) and vaccine hesitancy (intending to decline, delay, or were unsure about vaccination), adjusting for demographic characteristics, reasons why or why not to receive the vaccine, and prior receipt of routine vaccines.
Results:
Among 5,929 HCP (2,253 medical doctors [MDs] and doctors of osteopathy [DOs], 582 nurse practitioners [NPs], 158 physician assistants [PAs], and 2,936 nurses), a higher proportion of nurses (47.3%) were COVID-vaccine hesitant compared with 30.0% of PAs and NPs and 13.1% of MDs and DOs. The most common reasons for vaccine hesitancy included concerns about side effects, the newness of the vaccines, and lack of vaccine knowledge. Regardless of position, Black HCP were more hesitant than White HCP (odds ratio [OR], ∼5) and females were more hesitant than males (OR, ∼2).
Conclusions:
Although most clinical HCP intended to receive a COVID-19 vaccine, intention varied by healthcare position. Consistent with other studies, hesitancy was also significantly associated with race or ethnicity across all positions. These results highlight the importance of understanding and effectively addressing reasons for hesitancy, especially among frontline HCP who are at increased risk of COVID exposure and play a critical role in recommending vaccines to patients.
Vision and hearing impairments are highly prevalent in adults 65 years of age and older. There is a need to understand their association with multiple health-related outcomes. We analyzed data from the Resident Assessment Instrument for Home Care (RAI-HC). Home care clients were followed for up to 5 years and categorized into seven unique cohorts based on whether or not they developed new vision and/or hearing impairments. An absolute standardized difference (stdiff) of at least 0.2 was considered statistically meaningful. Most clients (at least 60%) were female and 34.9 per cent developed a new sensory impairment. Those with a new concurrent vision and hearing impairment were more likely than those with no sensory impairments to experience a deterioration in receptive communication (stdiff = 0.68) and in cognitive performance (stdiff = 0.49). After multivariate adjustment, they had a twofold increased odds (adjusted odds ratio [OR] = 2.1; 95% confidence interval [CI]: 1.87, 2.35) of deterioration in cognitive performance. Changes in sensory functioning are common and have important effects on multiple health-related outcomes.
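The absolute standardized difference threshold used above has a simple form; the sketch below computes it for a continuous change score in two illustrative cohorts (the group sizes and scores are assumptions, not RAI-HC data).

```python
# Minimal sketch: the absolute standardized difference used as the
# meaningful-effect threshold (>= 0.2) for comparing two cohorts on a
# continuous measure, here with illustrative numbers.
import numpy as np

def abs_standardized_difference(x, y):
    """|mean difference| divided by the pooled standard deviation."""
    pooled_sd = np.sqrt((np.var(x, ddof=1) + np.var(y, ddof=1)) / 2)
    return abs(np.mean(x) - np.mean(y)) / pooled_sd

rng = np.random.default_rng(4)
no_impairment = rng.normal(loc=0.0, scale=1.0, size=500)        # change in cognitive score (assumed)
new_dual_impairment = rng.normal(loc=0.5, scale=1.0, size=120)  # change in cognitive score (assumed)
print(round(abs_standardized_difference(no_impairment, new_dual_impairment), 2))
```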
Substantial progress has been made in the standardization of nomenclature for paediatric and congenital cardiac care. In 1936, Maude Abbott published her Atlas of Congenital Cardiac Disease, which was the first formal attempt to classify congenital heart disease. The International Paediatric and Congenital Cardiac Code (IPCCC) is now utilized worldwide and has most recently become the paediatric and congenital cardiac component of the Eleventh Revision of the International Classification of Diseases (ICD-11). The most recent publication of the IPCCC was in 2017. This manuscript provides an updated 2021 version of the IPCCC.
The International Society for Nomenclature of Paediatric and Congenital Heart Disease (ISNPCHD), in collaboration with the World Health Organization (WHO), developed the paediatric and congenital cardiac nomenclature that is now within the eleventh version of the International Classification of Diseases (ICD-11). This unification of the IPCCC and ICD-11, the IPCCC ICD-11 Nomenclature, marks the first time that the clinical and administrative nomenclatures for paediatric and congenital cardiac care have been harmonized. The congenital cardiac component of ICD-11 grew from 29 congenital cardiac codes in ICD-9 and 73 in ICD-10 to the 318 codes submitted by ISNPCHD through 2018 for incorporation into ICD-11. After these 318 terms were incorporated into ICD-11 in 2018, the WHO ICD-11 team added a further 49 terms, some of which are acceptable legacy terms from ICD-10, while others provide greater granularity than the ISNPCHD originally thought acceptable. Thus, the total number of paediatric and congenital cardiac terms in ICD-11 is 367. In this manuscript, we describe and review the terminology, hierarchy, and definitions of the IPCCC ICD-11 Nomenclature. This article, therefore, presents a global system of nomenclature for paediatric and congenital cardiac care that unifies clinical and administrative nomenclature.
The members of ISNPCHD realize that the nomenclature published in this manuscript will continue to evolve. The version of the IPCCC that was published in 2017 has evolved and changed, and it is now replaced by this 2021 version. In the future, ISNPCHD will again publish updated versions of IPCCC, as IPCCC continues to evolve.
Rigorous scientific review of research protocols is critical to making funding decisions and to the protection of both human and non-human research participants. Given the increasing complexity of research designs and data analysis methods, quantitative experts, such as biostatisticians, play an essential role in evaluating the rigor and reproducibility of proposed methods. However, there is a common misconception that a statistician’s input is relevant only to the sample size/power and statistical analysis sections of a protocol. The comprehensive nature of a biostatistical review, coupled with limited guidance on key components of protocol review, motivated this work. Members of the Biostatistics, Epidemiology, and Research Design Special Interest Group of the Association for Clinical and Translational Science used a consensus approach to identify the elements of research protocols that a biostatistician should consider in a review, and to provide specific guidance on how each element should be reviewed. We present the resulting review framework as an educational tool and guideline for biostatisticians navigating review boards and panels. We briefly describe the approach to developing the framework, and we provide a comprehensive checklist and guidance on review of each protocol element. We posit that the biostatistical reviewer, through their breadth of engagement across multiple disciplines and experience with a range of research designs, can and should contribute significantly beyond review of the statistical analysis plan and sample size justification. Through careful scientific review, we hope to prevent excess resource expenditure and risk to humans and animals in poorly planned studies.
Regionalizing pre-colonial Africa aids in the collection and interpretation of primary sources as data for further analysis. This article includes a map with six broad regions and 34 sub-regions, which form a controlled vocabulary within which researchers may geographically organize and classify disparate pieces of information related to Africa’s past. In computational terms, the proposed African regions serve as data containers that consolidate, link, and disseminate research across a growing number of digital humanities projects related to the history of the African diasporas before c. 1900. Our naming of regions aims to avoid terminologies derived from European slave traders, colonialism, and modern-day countries.
Background: Successful containment of regional outbreaks of emerging multidrug-resistant organisms (MDROs) relies on early outbreak detection. However, deploying regional containment is resource intensive; understanding the distribution of different types of outbreaks might aid in further classifying types of responses.

Objective: We used a stochastic model of disease transmission in a region where healthcare facilities are linked by patient sharing to explore optimal strategies for early outbreak detection.

Methods: We simulated the introduction and spread of Candida auris in a region using a lumped-parameter stochastic adaptation of a previously described deterministic model (Clin Infect Dis 2019 Mar 28. doi:10.1093/cid/ciz248). Stochasticity was incorporated to capture early-stage behavior of outbreaks with greater accuracy than was possible with a deterministic model. The model includes the real patient-sharing network among healthcare facilities in an exemplary US state, using hospital claims data and the minimum data set from the CMS for 2015. Disease progression rates for C. auris were estimated from surveillance data and the literature. Each simulated outbreak was initiated with an importation to a Dartmouth Atlas of Health Care hospital referral region. To estimate the potential burden, we quantified the “facility-time” period during which infectious patients presented a risk of subsequent transmission within each healthcare facility.

Results: Of the 28,000 simulated outbreaks initiated with an importation to the community, 2,534 resulted in patients entering the healthcare facility network. Among those, 2,480 (98%) initiated a short outbreak that died out or quickly attenuated within 2 years without additional intervention. In the simulations, if containment responses were initiated for each of those short outbreaks, facility time at risk decreased by only 3%. If containment responses were initiated for the 54 (2%) outbreaks lasting 2 years or longer, facility time at risk decreased by 79%. Sentinel surveillance through point-prevalence surveys (PPSs) at the 23 skilled-nursing facilities caring for ventilated patients (vSNFs) in the network detected 50 (93%) of the 54 longer outbreaks (median, 235 days to detection). Quarterly PPSs at the 23 largest acute-care hospitals (ie, most discharges) detected 48 longer outbreaks (89%), but the time to detection was longer (median, 716 days to detection). Quarterly PPSs also identified 76 short-term outbreaks (in comparison to only 14 via vSNF PPSs) that self-terminated without intervention.

Conclusions: A vSNF-based sentinel surveillance system likely provides better information for guiding regional intervention for the containment of emerging MDROs than a similarly sized acute-care hospital–based system.
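As a toy illustration of why stochasticity matters for early-stage outbreaks, the sketch below runs discrete-time binomial-chain simulations of a single facility seeded with one imported carrier and counts how often the introduction persists beyond 2 years; the single-facility setting and all rates are assumptions, not the study's patient-sharing network model.

```python
# Toy stochastic sketch (a single facility, not the study's patient-sharing
# network): discrete-time binomial chains started from one imported carrier,
# used to count how often an introduction dies out quickly versus persists.
import numpy as np

rng = np.random.default_rng(5)
N, beta, clear = 200, 0.08, 0.05      # census, daily transmission and clearance rates (assumed)
days, n_runs = 2 * 365, 5000

persistent = 0
for _ in range(n_runs):
    infectious = 1                     # one importation
    for _ in range(days):
        if infectious == 0:
            break
        p_inf = 1 - np.exp(-beta * infectious / N)           # per-susceptible daily risk
        new_cases = rng.binomial(N - infectious, p_inf)
        recoveries = rng.binomial(infectious, 1 - np.exp(-clear))
        infectious += new_cases - recoveries
    persistent += infectious > 0

print(f"{persistent / n_runs:.1%} of importations still circulating after 2 years")
```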
The purpose of this study was to describe the prevalence of hearing loss (HL), vision loss (VL), and dual sensory loss (DSL) in Canadians 45–85 years of age. Audiometry and visual acuity were measured. Various levels of impairment severity were described. Results were extrapolated to the 2016 Canadian population. In 2016, 1,500,000 Canadian males 45–85 years of age had at least mild HL, 1,800,000 had at least mild VL, and 570,000 had DSL. Among females, 1,200,000 had at least mild HL, 2,200,000 had at least mild VL, and 450,000 had DSL. Among Canadians 45–85 years of age, mild, moderate, and severe HL was prevalent among 13.4 per cent, 3.7 per cent, and 0.4 per cent of males, and among 11.3 per cent, 2.3 per cent, and 0.2 per cent of females, respectively. Mild and moderate, or severe VL was prevalent among 19.8 per cent and 2.4 per cent of males, and among 23.9 per cent and 2.6 per cent of females, respectively. At least mild DSL was prevalent among 6.4 per cent of males and 6.1 per cent of females.
Copy number variants (CNVs) play a significant role in disease pathogenesis in a small subset of individuals with schizophrenia (~2.5%). Chromosomal microarray testing is a first-tier genetic test for many neurodevelopmental disorders. Similar testing could be useful in schizophrenia.
Aims
To determine whether clinically identifiable phenotypic features could be used to successfully model schizophrenia-associated (SCZ-associated) CNV carrier status in a large schizophrenia cohort.
Method
Logistic regression and receiver operating characteristic (ROC) curves tested the accuracy of readily identifiable phenotypic features in modelling SCZ-associated CNV status in a discovery data-set of 1215 individuals with psychosis. A replication analysis was undertaken in a second psychosis data-set (n = 479).
Results
In the discovery cohort, specific learning disorder (OR = 8.12; 95% CI 1.16–34.88, P = 0.012), developmental delay (OR = 5.19; 95% CI 1.58–14.76, P = 0.003) and comorbid neurodevelopmental disorder (OR = 5.87; 95% CI 1.28–19.69, P = 0.009) were significant independent variables in modelling positive carrier status for a SCZ-associated CNV, with an area under the ROC (AUROC) of 74.2% (95% CI 61.9–86.4%). A model constructed from the discovery cohort including developmental delay and comorbid neurodevelopmental disorder variables resulted in an AUROC of 83% (95% CI 52.0–100.0%) for the replication cohort.
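For illustration, the sketch below mirrors the general approach (logistic regression of carrier status on binary clinical-history features, evaluated by the area under the ROC curve) on synthetic data; the prevalence, effect sizes, and train/test split are assumptions, so the AUROC will not match the reported values.

```python
# Sketch on synthetic data (not the study's cohorts): logistic regression of
# CNV carrier status on binary clinical history features, evaluated by the
# area under the ROC curve.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
n = 1215
features = rng.integers(0, 2, size=(n, 3))   # learning disorder, developmental delay, comorbid NDD (assumed)
logit_p = -4 + features @ np.array([2.1, 1.6, 1.8])
carrier = rng.random(n) < 1 / (1 + np.exp(-logit_p))

X_train, X_test, y_train, y_test = train_test_split(
    features, carrier, random_state=0, stratify=carrier)
clf = LogisticRegression().fit(X_train, y_train)
auroc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
print(f"AUROC: {auroc:.2f}")
```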
Conclusions
These findings suggest that careful clinical history taking to document specific neurodevelopmental features may be informative in screening for individuals with schizophrenia who are at higher risk of carrying known SCZ-associated CNVs. Identification of genomic disorders in these individuals is likely to have clinical benefits similar to those demonstrated for other neurodevelopmental disorders.