Edited by
Laurie J. Mckenzie, University of Texas MD Anderson Cancer Center, Houston, and Denise R. Nebgen, University of Texas MD Anderson Cancer Center, Houston
A small but important fraction of cancers is primarily due to a hereditary cancer predisposition, and their diagnosis has significant clinical implications for both index cases and their families. Germline BRCA1/BRCA2 pathogenic variants (PVs) can lead to Hereditary Breast and Ovarian Cancer (HBOC) syndrome, and identification of both germline and somatic BRCA1/BRCA2 PVs has important treatment implications. In addition, endometrial cancer is closely associated with inherited PVs in the mismatch repair (MMR) genes, which lead to Lynch syndrome. Both HBOC and Lynch syndrome affect around 1 in 300 people, most of whom are undiagnosed. Genetic panel testing is crucial to identifying PV carriers before a sentinel cancer occurs, so that they can be offered prophylactic interventions such as risk-reducing salpingo-oophorectomy (RRSO). Within this chapter we discuss the most common hereditary cancer syndromes associated with gynecological cancer. These include HBOC, Lynch syndrome, the moderately penetrant genes RAD51C, RAD51D, BRIP1, PALB2, and ATM, as well as rarer hereditary cancer syndromes including Cowden syndrome (PTEN), DICER1 syndrome, Rhabdoid Tumor Predisposition syndrome (SMARCB1, SMARCA4), and Peutz-Jeghers syndrome (STK11).
Agitation is a common neuropsychiatric symptom in Alzheimer’s dementia. The Cohen-Mansfield Agitation Inventory (CMAI) assesses the frequency of 29 agitation behaviors in elderly persons. The frequency of each behavior is rated from 1 to 7 (1=never, 2=less than once a week, 3=once or twice a week, 4=several times a week, 5=once or twice a day, 6=several times a day, 7=several times an hour) and is typically reported as a single total score. This post hoc analysis explored the effect of brexpiprazole on the frequency of individual agitation behaviors.
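The rating-and-summing scheme described above can be sketched in code. This is an illustrative sketch only, not the licensed CMAI instrument; the validation rules are assumptions.

```python
# Illustrative sketch of CMAI-style scoring (not the licensed instrument):
# each of 29 behaviors is rated 1-7 by frequency, and the ratings are
# typically summed into a single total score.

FREQUENCY_LABELS = {
    1: "never",
    2: "less than once a week",
    3: "once or twice a week",
    4: "several times a week",
    5: "once or twice a day",
    6: "several times a day",
    7: "several times an hour",
}

def cmai_total(ratings):
    """Sum 29 per-behavior frequency ratings into one total score."""
    if len(ratings) != 29:
        raise ValueError("the CMAI rates 29 behaviors")
    if any(r not in FREQUENCY_LABELS for r in ratings):
        raise ValueError("each rating must be an integer from 1 to 7")
    return sum(ratings)
```

Under this scheme, totals range from 29 (every behavior rated "never") to 203 (every behavior rated "several times an hour").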
Methods:
Post hoc analyses were conducted for two 12-week, randomized, double-blind, placebo-controlled, parallel-arm, fixed-dose trials of brexpiprazole in patients with agitation in Alzheimer’s dementia (NCT01862640, NCT03548584). Data are reported using descriptive statistics for brexpiprazole (2 or 3 mg/day) and placebo, for patients who completed 12 weeks of treatment.
Results:
In the first fixed-dose trial (brexpiprazole 2 mg/day, n=120; placebo, n=118), baseline behavior frequency was similar between groups (range 1.12 to 4.92). At baseline, the most frequently observed behavior was “general restlessness” (brexpiprazole, 4.92; placebo, 4.82; approximately “once or twice a day”), and the least frequently observed behaviors were “biting” (brexpiprazole, 1.12) and “making physical sexual advances” (placebo, 1.14). At Week 12, the average reduction in mean frequency was -0.73 (brexpiprazole) and -0.60 (placebo), with a greater numerical reduction for 21/29 behaviors with brexpiprazole versus placebo. In the second fixed-dose trial (brexpiprazole 2 or 3 mg/day, n=192; placebo, n=103), baseline behavior frequency was similar between groups (range 1.12 to 5.22), and higher than in the first trial due to study inclusion criteria. At baseline, the most frequently observed behavior was “general restlessness” (brexpiprazole, 5.22; placebo, 5.09; approximately “once or twice a day”), and the least frequently observed behaviors were “making physical sexual advances” (brexpiprazole, 1.13) and “intentional falling” (placebo, 1.12). At Week 12, the average reduction in mean frequency was -0.78 (brexpiprazole) and -0.54 (placebo), with a greater numerical reduction for 26/29 behaviors with brexpiprazole versus placebo.
Conclusion:
In this post hoc analysis, brexpiprazole was associated with numerically greater reduction in the frequency of most individual agitation behaviors versus placebo.
Adherence plays a vital role in the effectiveness of non-pharmacological interventions. Intervention effects that disappear at follow-up have been attributed to inadequate self-practice beyond the intervention period. The purpose of this study was to examine the factors associated with adherence to aerobic exercise and Tai Chi, and the impact of adherence on short- and long-term effectiveness in improving sleep in patients with advanced lung cancer.
Methods
This study analyzed data collected in a clinical trial that evaluated the effects of aerobic exercise and Tai Chi in patients with advanced lung cancer. The two types of exercise were maintained at the same intensity but at different dosages. A total of 99 patients with advanced lung cancer recruited between 2018 and 2020 were included. Data were collected using self-report questionnaires.
Results
Fifty participants were randomly assigned to the aerobic exercise intervention and 49 to the Tai Chi intervention. Higher levels of satisfaction and lower levels of depression were significantly associated with higher attendance and compliance in both groups. Low fatigue levels contributed to higher attendance in Tai Chi. Both attendance and compliance were significantly associated with long-term sleep improvement.
Significance of results
Higher levels of satisfaction and lower levels of depression were important correlates of attendance and compliance with home-based practice in both groups, whereas lower levels of fatigue uniquely contributed to higher attendance in Tai Chi. Better exercise adherence improves the long-term effectiveness of exercise for sleep in patients with advanced lung cancer. Strategies to promote exercise adherence are particularly needed for patients with greater levels of depression and fatigue.
U.S. veterans report high rates of traumatic experiences and mental health symptomology [e.g. posttraumatic stress disorder (PTSD)]. The stress sensitization hypothesis posits that experiences of adversity sensitize individuals to stress reactions, which can lead to greater psychiatric problems. We extend this hypothesis by exploring how multiple adversities (early childhood adversity, combat-related trauma, and military sexual trauma) relate to heterogeneity in stress over time and, subsequently, greater risk for PTSD.
Methods
In total, 1,230 veterans were recruited for an observational, longitudinal study. Veterans responded to questionnaires on PTSD, stress, and traumatic experiences five times over an 18-month study period. We used latent transition analysis to examine how heterogeneity in adverse experiences relates to transitions into stress trajectory classes. We also explored how transition patterns related to PTSD symptomology.
Results
Across all models, we found support for stress sensitization. In general, combat trauma in combination with other types of adverse experiences, namely early childhood adversity and military sexual trauma, imposed a greater probability of transitioning into higher-risk stress profiles. We also found differential effects of early childhood and military-specific adversity on PTSD symptomology.
Conclusion
The present study rigorously integrates both military-specific and early life adversity into analysis of stress sensitivity, and is the first to examine how sensitivity might affect trajectories of stress over time. Our study provides a nuanced and specific look at who is at risk of sensitization to stress based on previous traumatic experiences, as well as which transition patterns are associated with greater PTSD symptomology.
Underrepresentation of Black biomedical researchers demonstrates continued racial inequity and lack of diversity in the field. The Black Voices in Research curriculum was designed to provide effective instructional materials that showcase inclusive excellence, facilitate the dialog about diversity and inclusion in biomedical research, enhance critical thinking and reflection, integrate diverse visions and worldviews, and ignite action. Instructional materials consist of short videos and discussion prompts featuring Black biomedical research faculty and professionals. Pilot evaluation of instructional content showed that individual stories promoted information relevance, increased knowledge, and created behavioral intention to promote diversity and inclusive excellence in biomedical research.
During March 27–July 14, 2020, the Centers for Disease Control and Prevention’s National Healthcare Safety Network extended its surveillance to hospital capacity during the COVID-19 pandemic. The data showed wide variation across hospitals in case burden, bed occupancy, ventilator usage, and healthcare personnel and supply status. These data were used to inform emergency responses.
The rapid spread of severe acute respiratory coronavirus virus 2 (SARS-CoV-2) throughout key regions of the United States in early 2020 placed a premium on timely, national surveillance of hospital patient censuses. To meet that need, the Centers for Disease Control and Prevention’s National Healthcare Safety Network (NHSN), the nation’s largest hospital surveillance system, launched a module for collecting hospital coronavirus disease 2019 (COVID-19) data. We present time-series estimates of the critical hospital capacity indicators from April 1 to July 14, 2020.
Design:
From March 27 to July 14, 2020, the NHSN collected daily data on hospital bed occupancy, number of hospitalized patients with COVID-19, and the availability and/or use of mechanical ventilators. Time series were constructed using multiple imputation and survey weighting to allow near–real-time daily national and state estimates to be computed.
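As a rough illustration of the survey-weighting step, each reporting hospital's count can be scaled by a survey weight so that non-reporting hospitals are represented in national totals. The counts and weights below are invented, and the NHSN's actual estimator also involves multiple imputation for missing reports.

```python
def weighted_total(counts, weights):
    """Survey-weighted total: each reporting hospital's count is scaled by
    its survey weight (e.g., the inverse of its reporting probability) so
    that non-reporting hospitals are represented in the estimate."""
    if len(counts) != len(weights):
        raise ValueError("one weight per reporting hospital")
    return sum(c * w for c, w in zip(counts, weights))

def weighted_proportion(numerators, denominators, weights):
    """Weighted ratio estimate, e.g., the share of occupied inpatient beds
    held by patients with COVID-19."""
    return weighted_total(numerators, weights) / weighted_total(denominators, weights)
```

For example, two hospitals reporting 10 and 20 occupied beds with weights 2.0 and 1.5 would contribute a weighted total of 50 beds.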
Results:
During the pandemic’s April peak in the United States, among an estimated 431,000 total inpatients, 84,000 (19%) had COVID-19. Although the number of inpatients with COVID-19 decreased from April to July, the proportion of occupied inpatient beds increased steadily. COVID-19 hospitalizations increased from mid-June in the South and Southwest regions after stay-at-home restrictions were eased. The proportion of inpatients with COVID-19 on ventilators decreased from April to July.
Conclusions:
The NHSN hospital capacity estimates served as important, near–real-time indicators of the pandemic’s magnitude, spread, and impact, providing quantitative guidance for the public health response. Use of the estimates detected the rise of hospitalizations in specific geographic regions in June after they declined from a peak in April. Patient outcomes appeared to improve from early April to mid-July.
Despite the substantial investment by Australian health authorities to improve the health of rural and remote communities, rural residents continue to experience health care access challenges and poorer health outcomes. Health literacy and community engagement are both considered critical in addressing these health inequities. However, the current focus on health literacy can place undue burdens of responsibility for healthcare on individuals from disadvantaged communities whilst not taking due account of broader community needs and healthcare expectations. This can also marginalize the influence of community solidarity and mobilization in effecting healthcare improvements.
Objective:
The objective is to present a conceptual framework that describes community literacy, its alignment with health literacy, and its relationship to concepts of community-engaged healthcare.
Findings:
Community literacy aims to integrate community knowledge, skills, and resources into the design, delivery, and adaptation of healthcare policies and services at regional and local levels, with the provision of primary, secondary, and tertiary healthcare that aligns with individual community contexts. A set of principles is proposed to support the development of community literacy. Three levels of community literacy education for health personnel are described that align with those applied to health literacy for consumers. It is proposed that community literacy education can facilitate transformational community engagement. Skills acquired by health personnel, from senior executives to frontline clinical staff, can also lead to enhanced opportunities to promote health literacy for individuals.
Conclusions:
The integration of health and community literacy provides a holistic framework that has the potential to effectively respond to the diversity of rural and remote Australian communities and their healthcare needs and expectations. Further research is required to develop, validate, and evaluate the three levels of community literacy education and alignment to health policy, prior to promoting its uptake more widely.
Background: The National Healthcare Safety Network (NHSN) has used positive laboratory tests for surveillance of Clostridioides difficile infection (CDI) LabID events since 2009. Typically, CDIs are detected using enzyme immunoassays (EIAs), nucleic acid amplification tests (NAATs), or various test combinations. The NHSN uses a risk-adjusted, standardized infection ratio (SIR) to assess healthcare facility-onset (HO) CDI. Despite including test type in the risk adjustment, some hospital personnel and other stakeholders are concerned that NAAT use is associated with higher SIRs than EIA use. To investigate this issue, we analyzed NHSN data from acute-care hospitals for July 1, 2017, through June 30, 2018. Methods: Calendar quarters for which CDI test type was reported as NAAT (includes NAAT, glutamate dehydrogenase (GDH)+NAAT, and GDH+EIA followed by NAAT if discrepant) or EIA (includes EIA and GDH+EIA) were selected. HO CDI SIRs were calculated for facility-wide inpatient locations. We conducted the following 2 analyses: (1) Among hospitals that did not switch their test type, we compared the distribution of HO incidence rates and SIRs between those reporting NAAT and those reporting EIA. (2) Among hospitals that switched their test type, we selected quarters with a stable switch pattern of 2 consecutive quarters each of EIA and NAAT (categorized as EIA-to-NAAT or NAAT-to-EIA). Pooled semiannual SIRs for EIA and NAAT were calculated, and a paired t test was used to evaluate the difference in SIRs by switch pattern. Results: Most hospitals did not switch test types (3,242, 89%), and 2,872 (89%) reported sufficient data to calculate an SIR, with 2,444 (85%) using NAAT. The crude pooled HO CDI incidence rates for hospitals using EIA clustered at the lower end of the histogram versus rates for NAAT (Fig. 1). The SIR distributions of NAAT and EIA overlapped substantially and covered a similar range of SIR values (Fig. 1).
Among hospitals with a switch pattern, hospitals were equally likely to have an increase or decrease in their SIRs (Fig. 2). The mean SIR difference for the 42 hospitals switching from EIA to NAAT was 0.048 (95% CI, −0.189 to 0.284; P = .688). The mean SIR difference for the 26 hospitals switching from NAAT to EIA was 0.162 (95% CI, −0.048 to 0.371; P = .124). Conclusions: The pattern of SIR distributions for both NAAT and EIA substantiates the soundness of the NHSN’s risk adjustment for CDI test types. Switching test type did not produce a consistent, statistically significant directional pattern in SIRs.
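The switch-pattern comparison rests on a paired t test of per-hospital SIRs before and after the test-type switch. A minimal sketch of that test is below; the hospital SIR values in the example are made up for illustration, not NHSN data.

```python
import math
from statistics import mean, stdev

def paired_t(before, after):
    """Paired t test on within-hospital SIR differences.

    `before` and `after` are per-hospital SIRs under the two test types.
    Returns (mean difference, t statistic, degrees of freedom).
    """
    if len(before) != len(after):
        raise ValueError("SIR pairs must come from the same hospitals")
    diffs = [a - b for b, a in zip(before, after)]
    n = len(diffs)
    d_bar = mean(diffs)
    t = d_bar / (stdev(diffs) / math.sqrt(n))
    return d_bar, t, n - 1

# Hypothetical SIRs for four hospitals before and after switching test type
d_bar, t, df = paired_t([1.0, 1.2, 0.8, 1.1], [1.2, 1.3, 0.9, 1.1])
```

The t statistic with n − 1 degrees of freedom is then compared against the t distribution to obtain the P values reported above.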
Background: Antimicrobial resistance (AMR) is an increasingly critical global public health challenge. An initial step in prevention is understanding resistance patterns through accurate surveillance. To improve accurate surveillance and good clinical care, we developed training materials to improve the appropriate collection of clinical culture samples in Ethiopia. Methods: Specimen-collection training materials were initially developed by a team of infectious diseases physicians, a clinical microbiologist, and a monitoring and evaluation specialist using a training-of-trainers (ToT) platform. Revisions after each training session were provided by Ethiopian attendees, including the addition of regionally and culturally relevant material. The training format involved didactic presentations; interactive practice sessions in which participants provided feedback and training to one another and to the entire group; and assessments of all training activities. Results: Overall, 4 rounds of training were conducted from August 2017 to September 2019. The first 2 rounds were conducted by The Ohio State University (OSU) staff, and Ethiopian trainers conducted the last 2 rounds. Initial training was primarily in lecture format, outlining the use of microbiology laboratory findings in clinical practice and the steps for collecting specimens correctly. Appropriate specimen collection was demonstrated and practiced. Essential feedback from this early audience provided input for the final development of the training manual and visual aids. The ToT for master trainers took place in July 2018 and was conducted by OSU staff. In sessions held in February and August 2019, these master trainers provided training to facility trainers, who in turn train the personnel directly responsible for specimen collection.
In total, 144 healthcare personnel (including physicians, nurses, and laboratory staff) from 12 representative Ethiopian public and academic hospitals participated in the trainings. Participants were satisfied with the quality of the training (typically ranked >4.5 of 5.0) and strongly agreed that the objectives were clearly defined and that the information was relevant to their work. Posttraining scores increased by 23%. Conclusions: Training materials for clinical specimen collection have been developed for use in low- and middle-resource settings, with initial pilot testing and adoption in Ethiopia. The trainings were well accepted, and Ethiopian personnel were able to successfully lead the trainings and improve their knowledge and skills regarding specimen collection. The materials are being finalized in an online format for easier open-access dissemination. Further studies are planned to determine the effectiveness of the trainings in improving the quality of clinical specimen submissions to the microbiology laboratory.
Shared patient–clinician decision-making is central to choosing between medical treatments. Decision support tools can have an important role to play in these decisions. We developed a decision support tool for deciding between nonsurgical treatment and surgical total knee replacement for patients with severe knee osteoarthritis. The tool aims to provide likely outcomes of alternative treatments based on predictive models using patient-specific characteristics. To make those models relevant to patients with knee osteoarthritis and their clinicians, we involved patients, family members, patient advocates, clinicians, and researchers as stakeholders in creating the models.
Methods:
Stakeholders were recruited through local arthritis research, advocacy, and clinical organizations. After brief methodological education sessions, stakeholders’ views were solicited through quarterly patient or clinician stakeholder panel meetings and incorporated into all aspects of the project.
Results:
Stakeholders participated in every aspect of the research, from determining the outcomes of interest to providing input on the design of the user interface displaying outcome predictions, and 86% (12/14) remained engaged throughout the project. Stakeholder engagement ensured that the prediction models that form the basis of the Knee Osteoarthritis Mathematical Equipoise Tool, and its user interface, were relevant for patient–clinician shared decision-making.
Conclusions:
Methodological research has the opportunity to benefit from stakeholder engagement by ensuring that the perspectives of those most impacted by the results are involved in study design and conduct. While additional planning and investments in maintaining stakeholder knowledge and trust may be needed, they are offset by the valuable insights gained.
To enhance enrollment into randomized clinical trials (RCTs), we proposed electronic health record-based clinical decision support for patient–clinician shared decision-making about care and RCT enrollment, based on “mathematical equipoise.”
Objectives:
As an example, we created the Knee Osteoarthritis Mathematical Equipoise Tool (KOMET) to determine the presence of patient-specific equipoise between treatments for the choice between total knee replacement (TKR) and nonsurgical treatment of advanced knee osteoarthritis.
Methods:
With input from patients and clinicians about important pain and physical function treatment outcomes, we created a database from non-RCT sources of knee osteoarthritis outcomes. We then developed multivariable linear regression models that predict 1-year individual-patient knee pain and physical function outcomes for TKR and for nonsurgical treatment. These predictions allowed detecting mathematical equipoise between these two options for patients eligible for TKR. Decision support software was developed to graphically illustrate, for a given patient, the degree of overlap of pain and functional outcomes between the treatments and was pilot tested for usability, responsiveness, and as support for shared decision-making.
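The idea of patient-specific equipoise can be illustrated as the degree of overlap between the predicted-outcome intervals for the two treatments. The overlap measure and threshold below are assumptions for illustration, not KOMET's actual decision rule.

```python
def interval_overlap(a, b):
    """Fraction of the combined span shared by two (low, high) prediction
    intervals; 0.0 means disjoint, 1.0 means identical."""
    (lo1, hi1), (lo2, hi2) = a, b
    inter = max(0.0, min(hi1, hi2) - max(lo1, lo2))
    span = max(hi1, hi2) - min(lo1, lo2)
    return inter / span if span > 0 else 1.0

def in_equipoise(pred_surgical, pred_nonsurgical, threshold=0.5):
    """Flag equipoise when a patient's predicted 1-year outcome intervals
    for the two treatments overlap at least `threshold` of their span.
    The 0.5 default is an illustrative assumption."""
    return interval_overlap(pred_surgical, pred_nonsurgical) >= threshold
```

A patient whose predicted pain outcomes under the two treatments largely overlap would be flagged as in equipoise, a natural candidate for shared decision-making or RCT enrollment; widely separated predictions would favor one treatment.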
Results:
The KOMET predictive regression model for knee pain had four patient-specific variables and an r² value of 0.32; the model for physical functioning included six patient-specific variables and an r² of 0.34. These models were incorporated into prototype KOMET decision support software, pilot tested in clinics, and generally well received.
Conclusions:
Use of predictive models and mathematical equipoise may help discern patient-specific equipoise to support shared decision-making for selecting between alternative treatments and considering enrollment into an RCT.
There is no suitable vaccine against human visceral leishmaniasis (VL), and available drugs are toxic and/or costly. In this context, diagnostic tools should be improved for clinical management and epidemiological evaluation of the disease. However, the variable sensitivity and/or specificity of the antigens currently used is a limitation, underscoring the need to identify new molecules for more sensitive and specific serology. In the present study, an immunoproteomics approach was applied to Leishmania infantum promastigotes and amastigotes using serum samples from VL patients. To avoid undesired cross-reactivity in the serological assays, sera from Chagas disease patients and from healthy subjects living in the endemic region were also used in immunoblotting. The most reactive spots for VL samples were selected, and 29 and 21 proteins were identified in the promastigote and amastigote extracts, respectively. Two of them, endonuclease III and a GTP-binding protein, were cloned, expressed, purified, and tested in ELISA experiments against a large serological panel; the results showed high sensitivity and specificity values for the diagnosis of the disease. In conclusion, the identified proteins could be considered in future studies as candidate antigens for the serodiagnosis of human VL.
Farmers’ market interventions are a popular strategy for addressing chronic disease disparities in low-income neighbourhoods. With limited resources, strategic targeting of interventions is critical. The present study used spatial analysis to identify where market interventions have the greatest impact on healthy food access within a geographic region.
Design
All farmers’ markets in a mixed urban/rural county were mapped and those that accepted Supplemental Nutrition Assistance Program (SNAP) electronic benefit transfer (EBT) cards identified. Households were grouped into small neighbourhoods and mapped. The area of ‘reasonable access’ around each market (walking distance (0·8 km; 0·5 mile) in urban areas, driving distance (15 min) in rural areas) was calculated using spatial analysis. The percentage of county low-income households within a market’s access area, and the percentage of county SNAP-participating households within an EBT-accepting market’s access area, were calculated. The ten neighbourhoods with the most low-income households and with the most SNAP-participating households were then identified, their access areas calculated and mapped, and those lacking access identified. County-level gains resulting from improving market accessibility in these areas were calculated.
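The access-area calculation can be approximated with circular buffers around each market. The sketch below uses simple great-circle distance and invented coordinates; the study's analysis used walking and driving areas rather than crow-flies circles.

```python
import math

def haversine_km(p1, p2):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    lat1, lon1 = p1
    lat2, lon2 = p2
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = (math.sin(dlat / 2) ** 2
         + math.cos(math.radians(lat1)) * math.cos(math.radians(lat2))
         * math.sin(dlon / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def share_with_access(households, markets, radius_km=0.8):
    """Fraction of households within radius_km of at least one market;
    0.8 km approximates the study's urban walking buffer."""
    if not households:
        return 0.0
    covered = sum(
        1 for h in households
        if any(haversine_km(h, m) <= radius_km for m in markets)
    )
    return covered / len(households)
```

Running this once over all households and once over SNAP-participating households against EBT-accepting markets yields the two county-level access percentages reported below.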
Subjects
None.
Setting
Honolulu County, Hawaii, USA.
Results
Only 44 % of SNAP-participating households had EBT-market access. Six of the ten highest SNAP-participant neighbourhoods lacked access. Improving access for these neighbourhoods increased county-level access by 23 %. Market access for low-income households was 74 %. Adding markets to these low-income neighbourhoods without market access increased county-level access by 4 %.
Conclusions
Geographic identification of market access demographics, and strategic targeting of EBT interventions, could improve regional access to healthy foods.
Weedy species of the genus Amaranthus, commonly referred to as pigweeds, have increased in frequency and severity over the past few years. Identification of these weeds is difficult because of similar morphological characteristics among species and variation within species. Studies were initiated to develop a molecular marker identification system utilizing restriction enzyme analysis of amplified ribosomal DNA (rDNA). A set of polymerase chain reaction (PCR) markers was developed to distinguish 10 weedy species of pigweeds. Restriction-site variation, utilizing five endonucleases, within the internal transcribed spacers (ITS) of the rDNA allowed for the positive identification of eight species and one pair of species. These markers will be useful for biological and ecological studies on the genus.
Herbicide resistance has been reported in several Amaranthus species throughout the U.S. Because evidence exists of interspecies hybridization in some species of this genus, this study was conducted to determine whether acetolactate synthase (ALS)-inhibiting herbicide resistance could be transferred from Amaranthus palmeri to Amaranthus rudis through interspecific crosses. Plants of each species were grown in a growth chamber, and controlled interspecies crosses were made between ALS-resistant and -susceptible plants. A total of 15 putative hybrid plants were produced from an estimated 10,000 cross-pollinated flowers. Restriction enzyme digests of the ALS gene, in which a single base substitution confers resistance, indicated that herbicide resistance had been transferred from a resistant male A. rudis to the hybrid plants. Offspring of hybrid plants, backcrossed to the susceptible parent, survived herbicide treatment, demonstrating that herbicide resistance was transferred between species. DNA analysis was also performed on parental and putative hybrid plants using the amplified fragment length polymorphism (AFLP) technique. Several bands were found only in the hybrids.
Well-established methods exist for measuring party positions, but reliable means for estimating intra-party preferences remain underdeveloped. While most efforts focus on estimating the ideal points of individual legislators based on inductive scaling of roll call votes, these data suffer from two problems: selection bias due to unrecorded votes, and strong party discipline, which tends to make voting a strategic rather than a sincere indication of preferences. By contrast, legislative speeches are relatively unconstrained, as party leaders are less likely to punish MPs for speaking freely as long as they vote with the party line. Yet the differences between roll call estimates and text scalings remain essentially unexplored, despite the growing application of statistical analysis of textual data to measure policy preferences. Our paper addresses this lacuna by exploiting a rich feature of the Swiss legislature: on most bills, legislators both vote and speak many times. Using these data, we compare text-based scaling of ideal points to vote-based scaling for a crucial piece of energy legislation. Our findings confirm that text scalings reveal larger intra-party differences than roll calls. Using regression models, we further explain the differences between roll call and text scalings by attributing them to constituency-level preferences for energy policy.
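The paper's central comparison, whether text-based ideal points show larger intra-party spread than vote-based ones, can be sketched with a simple summary statistic. The legislators, parties, and ideal points below are toy values, not the Swiss data or any particular scaling model.

```python
from statistics import mean, pstdev

def within_party_spread(ideal_points, party_of):
    """Mean within-party standard deviation of legislators' ideal points.
    A larger value indicates more intra-party heterogeneity."""
    by_party = {}
    for mp, x in ideal_points.items():
        by_party.setdefault(party_of[mp], []).append(x)
    return mean(pstdev(xs) for xs in by_party.values() if len(xs) > 1)

party_of = {"a": "L", "b": "L", "c": "R", "d": "R"}
# Vote-based scaling: party discipline compresses within-party variation
vote_points = {"a": 0.00, "b": 0.01, "c": 0.99, "d": 1.00}
# Text-based scaling: freer speech reveals more intra-party heterogeneity
text_points = {"a": -0.30, "b": 0.30, "c": 0.70, "d": 1.30}
```

With these toy numbers, the text-based spread exceeds the vote-based spread, mirroring the paper's finding that speeches reveal larger intra-party differences than roll calls.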