Hospital employees are at risk of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection from patient, coworker, and community interactions. Understanding employees’ perspectives on transmission risks may inform hospital pandemic management strategies.
Design:
Qualitative interviews were conducted with 23 employees to assess factors contributing to perceived transmission risks during patient, coworker, and community interactions and to elicit recommendations. Transcripts were coded using a deductive approach to identify recurring themes.
Setting:
Tertiary hospital in Boston, Massachusetts.
Participants:
Employees with a positive SARS-CoV-2 PCR test between March 2020 and January 2021, a period before widespread vaccine availability.
Results:
Employees generally reported low concern about transmission risks during patient care. Most patient-related risks, including limited inpatient testing and personal protective equipment availability, were only reported during the early weeks of the pandemic, except for suboptimal masking adherence by patients. Participants reported greater perceived transmission risks from coworkers, due to limited breakroom space, suboptimal coworker masking, and perceptions of inadequate contact tracing. Perceived community risks were related to social gatherings and to household members who also had high SARS-CoV-2 infection risk because they were essential workers. Recommendations included increasing well-ventilated workspaces and breakrooms, increasing support for sick employees, and stronger hospital communication about risks from non-patient-care activities, including the importance of masking adherence with coworkers and in the community.
Conclusions:
To reduce transmission during future pandemics, hospitals may consider improving communication on risk reduction during coworker and community interactions. Societal investments are needed to improve hospital infrastructure (eg, better ventilation and breakroom space) and increase support for sick employees.
This paper empirically compares the use of straightforward versus more complex methods to estimate models of public goods game data. Five estimation methods were compared while holding the dependent and explanatory variables constant. The models were evaluated using a large out-of-sample cross-country public goods game data set. The ordered probit and tobit random-effects models yielded lower p values than the more straightforward models: ordinary least squares and fixed- and random-effects models. However, the more complex models also had a greater predictive bias. The straightforward models performed better than expected. Despite their limitations, they produced unbiased predictions for both the in-sample and out-of-sample data.
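As a hedged illustration of the bias comparison reported in this abstract, the following sketch fits a pooled OLS model on simulated public-goods-style data and computes in-sample and out-of-sample predictive bias (mean prediction error). The variables (endowment, game round) and the data-generating process are invented for illustration, not taken from the study.

```python
# Sketch only: predictive bias of pooled OLS on synthetic data;
# variable names and effect sizes are assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

def simulate(n):
    endowment = rng.integers(10, 51, n).astype(float)
    game_round = rng.integers(1, 11, n).astype(float)
    contribution = 0.4 * endowment - 0.8 * game_round + rng.normal(0, 3, n)
    X = sm.add_constant(np.column_stack([endowment, game_round]))
    return X, contribution

X_in, y_in = simulate(500)     # "in-sample" data used to fit the model
X_out, y_out = simulate(300)   # held-out "cross-country" data

ols = sm.OLS(y_in, X_in).fit()
bias_in = np.mean(ols.fittedvalues - y_in)      # ~0 by construction for OLS
bias_out = np.mean(ols.predict(X_out) - y_out)  # ~0 indicates unbiasedness
print(f"in-sample bias: {bias_in:.3f}, out-of-sample bias: {bias_out:.3f}")
```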
Disasters exacerbate inequities in health care. Health systems use the Hospital Incident Command System (HICS) to plan and coordinate their disaster response. This study examines how 2 health systems prioritized equity in implementing HICS during the coronavirus disease 2019 (COVID-19) pandemic and identifies factors that influenced implementation.
Methods:
This is a qualitative case comparison study, involving semi-structured interviews with 29 individuals from 2 US academic health systems. Strategies for promoting health equity were categorized by social determinants of health. The Consolidated Framework for Implementation Research (CFIR) guided analysis using a hybrid inductive-deductive approach.
Results:
The health systems used various strategies to incorporate health equity throughout implementation, addressing all 5 social determinants of health domains. Facilitators included HICS principles, external partnerships, community relationships, senior leadership, health equity experts and networks, champions, equity-stratified data, teaming, and a culture of health equity. Barriers encompassed role ambiguity for equity representatives, tokenism, competing priorities, insufficient resource allocation, and lack of preparedness.
Conclusions:
These findings elucidate how health systems centered equity during HICS implementation. Health systems and regulatory bodies can use these findings as a foundation to revise the HICS and move toward a more equitable disaster response.
We investigated concurrent outbreaks of Pseudomonas aeruginosa carrying blaVIM (VIM-CRPA) and Enterobacterales carrying blaKPC (KPC-CRE) at a long-term acute-care hospital (LTACH A).
Methods:
We defined an incident case as the first detection of blaKPC or blaVIM from a patient’s clinical cultures or colonization screening test. We reviewed medical records and performed infection control assessments, colonization screening, environmental sampling, and molecular characterization of carbapenemase-producing organisms from clinical and environmental sources by pulsed-field gel electrophoresis (PFGE) and whole-genome sequencing.
Results:
From July 2017 to December 2018, 76 incident cases were identified from 69 case patients: 51 had blaKPC, 11 had blaVIM, and 7 had both blaVIM and blaKPC. Also, blaKPC was identified across 7 different Enterobacterales, and all blaVIM were carried by P. aeruginosa. We observed gaps in hand hygiene, and we recovered KPC-CRE and VIM-CRPA from drains and toilets. We identified 4 KPC alleles and 2 VIM alleles; 2 KPC alleles were located on plasmids that were identified across multiple Enterobacterales and in both clinical and environmental isolates.
Conclusions:
Our response to a single patient colonized with VIM-CRPA and KPC-CRE identified concurrent CPO outbreaks at LTACH A. Epidemiologic and genomic investigations indicated that the observed diversity was due to a combination of multiple introductions of VIM-CRPA and KPC-CRE and transfer of carbapenemase genes across different bacterial species and strains. Improved infection control, including interventions that minimized potential spread from wastewater premise plumbing, stopped transmission.
Although the idea that existing policies can have major effects on politics and policy development is hardly new, the last three decades have witnessed a major expansion of policy feedback scholarship, which focuses on the mechanisms through which existing policies shape politics and policy development. Starting with a discussion of the origins of the concept of policy feedback, this element explores early and more recent contributions of the policy feedback literature to clarify the meaning of this concept and its contribution to both political science and policy studies. After exploring the rapidly expanding scholarship on policy feedback and mass politics, this element also puts forward new research agendas that stress several ways forward, including the need to explain both institutional and policy continuity and change. Finally, the element discusses the practical implications of policy feedback research through a discussion of its potential impact on policy design.
While much has been written about the politics of retrenchment, social policy expansion does occur today in a number of advanced industrial societies, which raises the question of how to study expansion in a post-retrenchment era. The present article explores the new politics of social policy expansion in Canada. Drawing on the work of Paul Pierson, we use an integrated framework that highlights the interaction of five factors: the availability of fiscal resources; the emergence of new social risks; the intensity and nature of partisan competition; the policy preferences of the main political parties; and the role of political institutions, especially federalism. Empirically, the article studies the politics of federal social policy expansion during the Harper (2006–2015) and Justin Trudeau (2015–) years, with a focus on three policy areas: child benefits (Universal Child Care Benefit and Canada Child Benefit), pensions (Old Age Security and Canada/Quebec Pension Plan) and Employment Insurance.
Clinical trials, which are mainly conducted in urban medical centers, may be less accessible to rural residents. Our aims were to assess the participation of rural residents in clinical trials and the factors associated with their participation.
Methods:
Using geocoding, the residential address of participants enrolled into clinical trials at Mayo Clinic locations in Arizona, Florida, and the Midwest between January 1, 2016, and December 31, 2017, was categorized as urban or rural. Distance travelled and trial characteristics were compared between urban and rural participants. Ordinal logistic regression analyses were used to evaluate whether study location and risks were associated with rural participation in trials.
Results:
Among 292 trials, including 136 (47%) cancer trials, there were 2313 participants. Of these, 731 (32%) were rural participants, a proportion greater than the rural share of the population in these 9 states (19%; P < 0.001). Compared to urban participants, rural participants were older (65 ± 12 years vs 64 ± 12 years, P = 0.004) and travelled further to the medical center (103 ± 104 vs 68 ± 88 miles, P < 0.001). The proportion of urban and rural participants who were remunerated was comparable. In the multivariable analysis, the proportion of rural participants was lower (P < 0.001) in Arizona (10%) and Florida (18%) than in the Midwest (38%) but was not significantly associated with study-related risks.
Conclusions:
Approximately one in three clinical trial participants were rural residents, versus one in five in the population. Rural residents travelled further to access clinical trials. Study-related risks were not associated with the distribution of rural and urban participants in trials.
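A minimal sketch of the kind of ordinal logistic regression named in the Methods above, using statsmodels. The trial-level outcome coding (low/medium/high rural-participant share) and the two predictors are illustrative assumptions, not the study's actual specification.

```python
# Illustrative ordinal logistic regression on simulated trial-level data;
# the outcome and predictor coding are assumptions, not the study's.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(1)
n_trials = 292
df = pd.DataFrame({
    "site_midwest": rng.integers(0, 2, n_trials),  # 1 = Midwest location
    "higher_risk": rng.integers(0, 2, n_trials),   # 1 = higher study risk
})
# latent propensity for rural participation; location matters, risk does not
latent = 1.2 * df["site_midwest"] + 0.05 * df["higher_risk"] \
    + rng.logistic(size=n_trials)
df["rural_share"] = pd.cut(latent, 3, labels=False)  # 0=low, 1=mid, 2=high

model = OrderedModel(df["rural_share"],
                     df[["site_midwest", "higher_risk"]], distr="logit")
print(model.fit(method="bfgs", disp=False).summary())
```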
During March 27–July 14, 2020, the Centers for Disease Control and Prevention’s National Healthcare Safety Network extended its surveillance to hospital capacity for responding to the COVID-19 pandemic. The data showed wide variations across hospitals in case burden, bed occupancy, ventilator usage, and healthcare personnel and supply status. These data were used to inform emergency responses.
The rapid spread of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) throughout key regions of the United States in early 2020 placed a premium on timely, national surveillance of hospital patient censuses. To meet that need, the Centers for Disease Control and Prevention’s National Healthcare Safety Network (NHSN), the nation’s largest hospital surveillance system, launched a module for collecting hospital coronavirus disease 2019 (COVID-19) data. We present time-series estimates of the critical hospital capacity indicators from April 1 to July 14, 2020.
Design:
From March 27 to July 14, 2020, the NHSN collected daily data on hospital bed occupancy, number of hospitalized patients with COVID-19, and the availability and/or use of mechanical ventilators. Time series were constructed using multiple imputation and survey weighting to allow near–real-time daily national and state estimates to be computed.
Results:
During the pandemic’s April peak in the United States, among an estimated 431,000 total inpatients, 84,000 (19%) had COVID-19. Although the number of inpatients with COVID-19 decreased from April to July, the proportion of occupied inpatient beds increased steadily. COVID-19 hospitalizations increased from mid-June in the South and Southwest regions after stay-at-home restrictions were eased. The proportion of inpatients with COVID-19 on ventilators decreased from April to July.
Conclusions:
The NHSN hospital capacity estimates served as important, near–real-time indicators of the pandemic’s magnitude, spread, and impact, providing quantitative guidance for the public health response. Use of the estimates detected the rise of hospitalizations in specific geographic regions in June after they declined from a peak in April. Patient outcomes appeared to improve from early April to mid-July.
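The following is a minimal sketch of the survey-weighting step described in the Design above: each reporting hospital's value is scaled by a weight reflecting how many hospitals it represents. The counts and weights are made-up illustrative values, and the real NHSN pipeline also used multiple imputation for missing reports.

```python
# Sketch of a survey-weighted national total; all numbers are invented.
import numpy as np

# one day's reported COVID-19 inpatient counts from five hospitals
covid_inpatients = np.array([40, 12, 95, 7, 60])
# survey weights: how many US hospitals each reporter represents (assumed)
weights = np.array([8.0, 15.0, 3.5, 20.0, 5.0])

national_estimate = float(np.sum(weights * covid_inpatients))
print(f"estimated national COVID-19 inpatient census: {national_estimate:,.0f}")
```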
Across the social sciences, scholars regularly pool effects over substantial periods of time, a practice that produces faulty inferences if the underlying data generating process is dynamic. To help researchers better perform principled analyses of time-varying processes, we develop a two-stage procedure based upon techniques for permutation testing and statistical process monitoring. Given time series cross-sectional data, we break the role of time through permutation inference and produce a null distribution that reflects a time-invariant data generating process. The null distribution then serves as a stable reference point, enabling the detection of effect changepoints. In Monte Carlo simulations, our randomization technique outperforms alternatives for changepoint analysis. A particular benefit of our method is that, by establishing the bounds for time-invariant effects before interacting with actual estimates, it is able to differentiate stochastic fluctuations from genuine changes. We demonstrate the method’s utility by applying it to a popular study on the relationship between alliances and the initiation of militarized interstate disputes. The example illustrates how the technique can help researchers make inferences about where changes occur in dynamic relationships and ask important questions about such changes.
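The two-stage idea in this abstract can be sketched in a few lines of Python: estimate a per-period effect, permute the time labels to produce a null distribution consistent with a time-invariant process, and flag periods whose estimates exit the null bounds. The per-period OLS slope used here is a simplified stand-in for the authors' estimators, and the panel is simulated.

```python
# Sketch of permutation-based changepoint detection on simulated panel data.
import numpy as np

rng = np.random.default_rng(42)

def period_slopes(x, y, periods):
    """OLS slope of y on x within each time period."""
    return np.array([np.polyfit(x[periods == t], y[periods == t], 1)[0]
                     for t in np.unique(periods)])

# synthetic panel: the effect of x on y doubles after period 10
T, n = 20, 200
periods = np.repeat(np.arange(T), n)
x = rng.normal(size=T * n)
beta = np.where(periods < 10, 1.0, 2.0)
y = beta * x + rng.normal(size=T * n)

observed = period_slopes(x, y, periods)

# stage 1: permutation null -- shuffle period labels to break the role of time
null = np.array([period_slopes(x, y, rng.permutation(periods))
                 for _ in range(500)])
lo, hi = np.percentile(null, [2.5, 97.5])

# stage 2: flag periods whose estimate exits the time-invariant bounds
print("flagged periods:", np.where((observed < lo) | (observed > hi))[0])
```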
Drawing on the literature on federalism and public policy, the present article explores the recent politics of two highly similar and closely integrated Canadian public pension programs created in the mid-1960s: the Canada Pension Plan (CPP) and the Quebec Pension Plan (QPP). This article argues that the parallel evolution of the CPP/QPP can be understood by examining how the unique jurisdictional arrangements governing these linked programs have interacted with other factors to generate specific federalism policy dynamics, while muting or foreclosing other potential policy dynamics. As shown, governments have engaged in a process of ‘collusive benchmarking’ that has limited the scope of the available policy options. Differing demographic trends in Quebec and the ‘Rest of Canada’ have strained but also reinforced this policymaking dynamic in recent years. Simultaneously, intergovernmental race-to-the-top dynamics have facilitated the recent push for both CPP and, later, QPP expansion.
Shared patient–clinician decision-making is central to choosing between medical treatments. Decision support tools can have an important role to play in these decisions. We developed a decision support tool for deciding between nonsurgical treatment and surgical total knee replacement for patients with severe knee osteoarthritis. The tool aims to provide likely outcomes of alternative treatments based on predictive models using patient-specific characteristics. To make those models relevant to patients with knee osteoarthritis and their clinicians, we involved patients, family members, patient advocates, clinicians, and researchers as stakeholders in creating the models.
Methods:
Stakeholders were recruited through local arthritis research, advocacy, and clinical organizations. After stakeholders received brief methodological education sessions, their views were solicited through quarterly patient or clinician stakeholder panel meetings and incorporated into all aspects of the project.
Results:
Participating in each aspect of the research, from determining the outcomes of interest to providing input on the design of the user interface displaying outcome predictions, 86% (12/14) of stakeholders remained engaged throughout the project. Stakeholder engagement ensured that the prediction models that form the basis of the Knee Osteoarthritis Mathematical Equipoise Tool and its user interface were relevant for patient–clinician shared decision-making.
Conclusions:
Methodological research has the opportunity to benefit from stakeholder engagement by ensuring that the perspectives of those most affected by the results inform study design and conduct. While additional planning and investments in maintaining stakeholder knowledge and trust may be needed, they are offset by the valuable insights gained.
To enhance enrollment into randomized clinical trials (RCTs), we proposed electronic health record-based clinical decision support for patient–clinician shared decision-making about care and RCT enrollment, based on “mathematical equipoise.”
Objectives:
As an example, we created the Knee Osteoarthritis Mathematical Equipoise Tool (KOMET) to determine the presence of patient-specific equipoise between treatments for the choice between total knee replacement (TKR) and nonsurgical treatment of advanced knee osteoarthritis.
Methods:
With input from patients and clinicians about important pain and physical function treatment outcomes, we created a database from non-RCT sources of knee osteoarthritis outcomes. We then developed multivariable linear regression models that predict 1-year individual-patient knee pain and physical function outcomes for TKR and for nonsurgical treatment. These predictions allowed detection of mathematical equipoise between these two options for patients eligible for TKR. Decision support software was developed to graphically illustrate, for a given patient, the degree of overlap of pain and functional outcomes between the treatments and was pilot tested for usability and responsiveness and as support for shared decision-making.
Results:
The KOMET predictive regression model for knee pain had four patient-specific variables and an r2 value of 0.32; the model for physical functioning included six patient-specific variables and an r2 of 0.34. These models were incorporated into prototype KOMET decision support software and pilot tested in clinics, where they were generally well received.
Conclusions:
Use of predictive models and mathematical equipoise may help discern patient-specific equipoise to support shared decision-making for selecting between alternative treatments and considering enrollment into an RCT.
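As a hedged sketch of the "mathematical equipoise" check this abstract describes, the code below compares hypothetical patient-specific outcome predictions for the two treatments and measures the overlap of their prediction intervals. All numbers are invented, and KOMET's actual models and overlap criterion may differ.

```python
# Sketch: detect patient-specific equipoise as overlap of prediction
# intervals; predictions and interval widths are invented for illustration.

def interval_overlap(a, b):
    """Fractional overlap of two (low, high) intervals; 0 = disjoint."""
    low, high = max(a[0], b[0]), min(a[1], b[1])
    return max(0.0, high - low) / min(a[1] - a[0], b[1] - b[0])

# hypothetical 1-year knee-pain predictions (0-100 scale) for one patient
pred_tkr, half_width_tkr = 25.0, 12.0        # total knee replacement
pred_ns, half_width_ns = 38.0, 14.0          # nonsurgical treatment

tkr = (pred_tkr - half_width_tkr, pred_tkr + half_width_tkr)
ns = (pred_ns - half_width_ns, pred_ns + half_width_ns)

overlap = interval_overlap(tkr, ns)
print(f"interval overlap: {overlap:.0%}")  # large overlap suggests equipoise
```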
In recent debates surrounding World Health Organization (WHO) reform, international lawmaking has received unprecedented attention as a future priority function of the Organization. Although WHO's constitutional lawmaking authority was historically neglected and even resisted by WHO and its Member States until the adoption of its first treaty a decade ago, the widespread consensus in favor of a central role for lawmaking in visions of a reformed WHO reflects the crystallization of contemporary approaches to global health governance. Today it is widely recognized that the trends toward globalization that have restricted the capacity of sovereign states to protect health through unilateral action alone have made innovative mechanisms to promote global cooperation and coordination, including international lawmaking, an essential component of governance of public health.
The success of central line-associated bloodstream infection (CLABSI) prevention programs in intensive care units (ICUs) has led to the expansion of surveillance at many hospitals. We sought to compare non-ICU CLABSI (nCLABSI) rates with national reports and describe methods of surveillance at several participating US institutions.
Design and Setting.
An electronic survey of several medical centers about infection surveillance practices and rate data for non-ICU patients.
Participants.
Ten tertiary care hospitals.
Methods.
In March 2011, a survey was sent to 10 medical centers. The survey consisted of 12 questions regarding demographics and CLABSI surveillance methodology for non-ICU patients at each center. Participants were also asked to provide available rate and device utilization data.
Results.
Hospitals ranged in size from 238 to 1,400 total beds (median, 815). All hospitals reported using Centers for Disease Control and Prevention (CDC) definitions. Denominators were collected by different means: counting patients with central lines every day (n = 5), estimating indirectly on the basis of electronic orders (n = 4), or using another automated method (n = 1). Rates of nCLABSI ranged from 0.2 to 4.2 infections per 1,000 catheter-days (median, 2.5). The national rate reported by the CDC using 2009 data from the National Healthcare Safety Network was 1.14 infections per 1,000 catheter-days.
Conclusions.
Only 2 hospitals were below the pooled CLABSI rate for inpatient wards; all others exceeded this rate. Possible explanations include differences in average central line utilization or hospital size, in the impact of certain clinical risk factors notably absent from the definition, and in interpretation and reporting practices. Further investigation is necessary to determine whether the national benchmarks are low or whether the hospitals surveyed here represent a selection of outliers.
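For readers unfamiliar with the rate metric above, this worked example shows how an nCLABSI rate per 1,000 catheter-days is computed; the counts are illustrative, not survey data.

```python
# Worked example: nCLABSI rate per 1,000 central-line (catheter) days.
infections = 14        # nCLABSIs observed on inpatient wards (illustrative)
catheter_days = 5600   # total central-line days in the same period

rate = infections / catheter_days * 1000
print(f"{rate:.2f} infections per 1,000 catheter-days")  # 2.50
```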
Imputation of moderate-density genotypes from low-density panels is of increasing interest in genomic selection, because it can dramatically reduce genotyping costs. Several imputation software packages have been developed, but they vary in imputation accuracy, and imputed genotypes may be inconsistent among methods. An AdaBoost-like approach is proposed to combine imputation results from several independent software packages, i.e. Beagle (v3.3), IMPUTE (v2.0), fastPHASE (v1.4), AlphaImpute, findhap (v2), and FImpute (v2), with each package serving as a basic classifier in an ensemble-based system. The ensemble-based method computes weights sequentially for all classifiers and combines results from component methods via weighted majority ‘voting’ to determine unknown genotypes. The data included 3078 registered Angus cattle, each genotyped with the Illumina BovineSNP50 BeadChip. SNP genotypes on three chromosomes (BTA1, BTA16, and BTA28) were used to compare imputation accuracy among methods, and the application involved the imputation of 50K genotypes covering 29 chromosomes based on a set of 5K genotypes. Beagle and FImpute had the greatest accuracy among the six imputation packages, with accuracies ranging from 0.8677 to 0.9858. The proposed ensemble method was better than any of these packages, but the sequence of independent classifiers in the voting scheme affected imputation accuracy. The ensemble systems yielding the best imputation accuracies were those that had Beagle as the first classifier, followed by one or two methods that utilized pedigree information. A salient feature of the proposed ensemble method is that it can resolve imputation inconsistencies among different imputation methods, leading to a more reliable system for imputing genotypes relative to independent methods.
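A minimal sketch of the weighted majority 'voting' step described above: each package proposes a genotype call (coded as 0/1/2 copies of an allele), and calls are combined with per-package weights. The weights here are placeholders; in the AdaBoost-like scheme they would be computed sequentially from each classifier's observed error rate.

```python
# Sketch: combine genotype calls from several imputation methods by
# weighted majority voting; weights are placeholder values.
import numpy as np

def weighted_vote(calls, weights, n_genotypes=3):
    """calls: (n_methods, n_loci) genotype codes; returns consensus codes."""
    n_methods, n_loci = calls.shape
    scores = np.zeros((n_genotypes, n_loci))
    for m in range(n_methods):
        # add this method's weight to the genotype it called at each locus
        scores[calls[m], np.arange(n_loci)] += weights[m]
    return scores.argmax(axis=0)

# hypothetical calls from three packages (e.g. Beagle, FImpute, findhap)
calls = np.array([[0, 1, 2, 1],
                  [0, 1, 1, 1],
                  [1, 1, 2, 0]])
weights = np.array([0.5, 0.3, 0.2])   # placeholder reliability weights
print(weighted_vote(calls, weights))  # -> [0 1 2 1]
```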
To evaluate the impact of postprescription review of broad-spectrum antimicrobial (study-ABX) agents on rates of antimicrobial use.
Design.
Quasi-experimental before-after study.
Setting.
Five academic medical centers.
Patients.
Adults receiving at least 48 hours of study-ABX.
Methods.
The baseline, intervention, and follow-up periods were 6 months each in 2 units at each of 5 sites. Adults receiving at least 48 hours of study-ABX entered the cohort as case-patients. During the intervention, infectious-diseases physicians reviewed the cases after 48 hours of study-ABX. The provider was contacted with alternative recommendations if antimicrobial use was considered to be unjustified on the basis of predetermined criteria. Acceptance rates were assessed 48 hours later. The primary outcome measure was days of study-ABX per 1,000 study-patient-days in the baseline and intervention periods.
Results.
There were 1,265 patients in the baseline period and 1,163 patients in the intervention period. Study-ABX use decreased significantly during the intervention period at 2 sites: from 574.4 to 533.8 study-ABX days/1,000 patient-days (incidence rate ratio [IRR], 0.93; 95% confidence interval [CI], 0.88–0.97; P = .002) at hospital B and from 615.6 to 514.4 study-ABX days/1,000 patient-days (IRR, 0.83; 95% CI, 0.79–0.88; P < .001) at hospital D. Both had established antimicrobial stewardship programs (ASPs). Study-ABX use increased at 2 sites and stayed the same at 1 site. At all institutions combined, 390 of 1,429 (27.3%) study-ABX courses were assessed as unjustified; recommendations to modify or stop therapy were accepted for 260 (66.7%) of these courses.
Conclusions.
Postprescription review of study-ABX decreased antimicrobial utilization in some of the study hospitals and may be more effective when performed as part of an established ASP.
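As a quick arithmetic check on the results above, an incidence rate ratio is simply the ratio of two rates; dividing hospital D's published rates approximately reproduces the reported IRR (the small gap reflects rounding of the rates).

```python
# Worked IRR example using hospital D's published rates from this abstract.
rate_baseline = 615.6       # study-ABX days per 1,000 patient-days, baseline
rate_intervention = 514.4   # same metric, intervention period

irr = rate_intervention / rate_baseline
print(f"IRR = {irr:.3f}")   # ~0.836, close to the reported 0.83
```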
Bayesian regularized artificial neural networks (BRANNs) were used to predict body mass index (BMI) in mice using single nucleotide polymorphism (SNP) markers. Data from 1896 animals with both phenotypic and genotypic (12 320 loci) information were used for the analysis. Missing genotypes were imputed based on estimated allelic frequencies, with no attempt to reconstruct haplotypes based on family information or linkage disequilibrium between markers. A feed-forward multilayer perceptron network consisting of a single output layer and one hidden layer was used. Training of the neural network was done using the Bayesian regularized backpropagation algorithm. When the number of neurons in the hidden layer was increased, the number of effective parameters, γ, increased up to a point and stabilized thereafter. A model with five neurons in the hidden layer produced a value of γ that saturated the data. In terms of predictive ability, a network with five neurons in the hidden layer attained the smallest error and highest correlation in the test data, although differences among networks were negligible. Using the inherent weight information of BRANNs with different numbers of neurons in the hidden layer, it was observed that 17 SNPs had a larger impact on the network, indicating their possible relevance in the prediction of BMI. It is concluded that BRANNs may be at least as useful as other methods for high-dimensional genome-enabled prediction, with the advantage of their potential ability to capture non-linear relationships, which may be useful in the study of quantitative traits under complex gene action.
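A rough sketch of the architecture described above: a single-hidden-layer feed-forward network predicting a phenotype from SNP codes. scikit-learn has no Bayesian regularized backpropagation, so L2 weight decay (the alpha parameter) stands in for it here; the data are simulated, making this an assumption-laden illustration rather than the study's method.

```python
# Sketch: 1-hidden-layer MLP on simulated SNP data; L2 weight decay is a
# stand-in for Bayesian regularization, which re-estimates the penalty.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)
n_animals, n_snps = 400, 200
X = rng.integers(0, 3, (n_animals, n_snps)).astype(float)  # 0/1/2 genotypes
# synthetic phenotype: additive SNP effects plus one non-linear interaction
y = X[:, 0] - 0.5 * X[:, 1] + 0.3 * X[:, 2] * X[:, 3] \
    + rng.normal(0, 1.0, n_animals)

net = MLPRegressor(hidden_layer_sizes=(5,),  # five hidden neurons, as above
                   alpha=1.0,                # L2 penalty (assumed value)
                   max_iter=2000, random_state=0)
net.fit(X[:300], y[:300])
r = np.corrcoef(net.predict(X[300:]), y[300:])[0, 1]
print(f"test-set correlation: {r:.3f}")
```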