Fasedienol (PH94B; 3β-androsta-4,16-dien-3-ol) is a synthetic neuroactive nasal spray from the androstane family of pherines. Intranasal fasedienol activates receptors in peripheral nasal chemosensory neurons connected to subsets of neurons in the olfactory bulbs that in turn are neurally connected to neurons in the limbic amygdala involved in the pathophysiology of SAD and potentially other anxiety and mood disorders. Fasedienol is locally metabolized in the olfactory mucosa without systemic uptake or binding to CNS receptors. The objective of the present study was to compare fasedienol vs. placebo during a public speaking challenge in subjects with SAD.
Methods
This was a multi-center, double-blind, randomized, placebo-controlled study (NCT05011396). After screening (Visit 1), all subjects completed Visit 2 (V2, Baseline, placebo nasal spray administered to all subjects) and participated in a 5-minute public speaking challenge (PSC) during which Subjective Units of Distress Scores (SUDS) were recorded. Subjects with SUDS ≥ 70 were invited back a week later for the Visit 3 (V3) treatment visit, randomly allocated to receive either fasedienol (3.2 μg intranasally) or placebo, and then underwent a second 5-minute PSC, with SUDS scores recorded. After the V3 PSC, subjects completed a Patient Global Impression of Change (PGI-C) and trained raters completed a Clinical Global Impression of Improvement (CGI-I). CGI-I responders were defined as those assigned scores of 1 (very much improved) or 2 (much improved); PGI-C responders reported scores of 1 (very much less anxious) or 2 (much less anxious). ANCOVA with baseline SUDS as a covariate was used to compare change in mean SUDS from V2 to V3 for the subjects administered fasedienol at V3 vs those who received placebo at V3.
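A minimal numeric sketch of the ANCOVA described above, fit as a linear model with a treatment indicator and the baseline SUDS covariate. All data below are simulated for illustration; none of the values are trial data.

```python
import numpy as np

# Simulated SUDS change scores (illustrative only; not trial data).
rng = np.random.default_rng(0)
n = 140
baseline = rng.uniform(70, 100, n)        # V2 SUDS; eligibility required >= 70
active = np.repeat([1.0, 0.0], n // 2)    # 1 = fasedienol arm, 0 = placebo arm
change = -8.0 - 5.8 * active + 0.1 * (baseline - 85) + rng.normal(0, 8, n)

# ANCOVA as ordinary least squares: intercept, treatment indicator, baseline covariate.
X = np.column_stack([np.ones(n), active, baseline])
beta, *_ = np.linalg.lstsq(X, change, rcond=None)
print(f"baseline-adjusted treatment effect: {beta[1]:.1f}")
```

With a real dataset, the coefficient on the treatment indicator is the baseline-adjusted difference in least-squares mean change between arms, the quantity reported below as -5.8.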
Results
Fasedienol-treated patients (n=70) demonstrated a statistically significant greater change in mean SUDS score (least-squares (LS) mean = -13.8) compared with placebo (n=71, LS mean = -8.0), for a difference between groups of -5.8 (p=0.015). The proportion of CGI-I responders was higher in the fasedienol group (37.7%) than in the placebo group (21.4%; p=0.033), as was the proportion of PGI-C responders (fasedienol 40.6% vs. placebo 18.6%; p=0.003). Fasedienol was well-tolerated, with no treatment-emergent adverse events above 1.5% occurrence.
Conclusion
The Phase 3 PALISADE-2 trial results demonstrated that a single dose of fasedienol prior to a stressful PSC produced efficacy on patient-rated SUDS and PGI-C, as well as the clinician-rated CGI-I. The results also confirmed the nasal-amygdala neural circuits as a new portal for administration of pharmaceuticals. The data support continued development of fasedienol as a first-in-class, rapid-onset, well-tolerated treatment option for SAD without addictive properties.
Hierarchical Bayes procedures for the two-parameter logistic item response model were compared for estimating item and ability parameters. Simulated data sets were analyzed via two joint and two marginal Bayesian estimation procedures. The marginal Bayesian estimation procedures yielded consistently smaller root mean square differences than the joint Bayesian estimation procedures for item and ability estimates. As the sample size and test length increased, the four Bayes procedures yielded essentially the same result.
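For readers unfamiliar with the model being estimated above, the two-parameter logistic (2PL) item response function gives the probability of a correct response from ability θ, item discrimination a, and item difficulty b. This is the standard formulation, not code from the study:

```python
import math

def p_correct(theta, a, b):
    # 2PL item response function: P(correct | theta) = 1 / (1 + exp(-a * (theta - b)))
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

print(round(p_correct(theta=0.0, a=1.0, b=0.0), 2))  # 0.5 when ability equals difficulty
```

The joint and marginal Bayes procedures compared in the study differ in whether ability parameters are estimated jointly with item parameters or integrated out of the likelihood before item estimation.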
Women are globally underrepresented as political leaders; as of January 2023, only 17 countries had a woman head of government. Included in this small group is Samoa, which elected Fiame Naomi Mata’afa as its first woman prime minister in 2021 after a fiercely contested election and subsequent protracted legal disputes centered around interpretations of Samoa’s 10% gender quota. Drawing on data from the Pacific Attitudes Survey, the first large-scale, nationally representative popular political attitudes survey conducted in the Pacific region, this article examines how the political environment in Samoa shapes opportunities for women’s political participation and leadership. Using the theoretical framework of cohabitation, it finds that although there is an enabling environment for women’s participation and leadership in formal politics, women’s access to decision-making spaces more broadly is still constrained by norms of traditional leadership. This speaks to traditional and nontraditional political norms and practices that coexist, at times uneasily, alongside one another.
Declining labor force participation of older men throughout the 20th century and recent increases in participation have generated substantial interest in understanding the effect of public pensions on retirement. The National Bureau of Economic Research's International Social Security (ISS) Project, a long-term collaboration among researchers in a dozen developed countries, has explored this and related questions. The project employs a harmonized approach to conduct within-country analyses that are combined for meaningful cross-country comparisons. The key lesson is that the choices of policy makers affect the incentive to work at older ages and these incentives have important effects on retirement behavior.
To characterize the relationship between chlorhexidine gluconate (CHG) skin concentration and skin microbial colonization.
Design:
Serial cross-sectional study.
Setting/participants:
Adult patients in medical intensive care units (ICUs) from 7 hospitals; from 1 hospital, additional patients colonized with carbapenemase-producing Enterobacterales (CPE) from both ICU and non-ICU settings. All hospitals performed routine CHG bathing in the ICU.
Methods:
Skin swab samples were collected from adjacent areas of the neck, axilla, and inguinal region for microbial culture and CHG skin concentration measurement using a semiquantitative colorimetric assay. We used linear mixed effects multilevel models to analyze the relationship between CHG concentration and microbial detection. We explored threshold effects using additional models.
Results:
We collected samples from 736 of 759 (97%) eligible ICU patients and 68 patients colonized with CPE. On skin, gram-positive bacteria were cultured most frequently (93% of patients), followed by Candida species (26%) and gram-negative bacteria (20%). The adjusted odds of microbial recovery for every twofold increase in CHG skin concentration were 0.84 (95% CI, 0.80–0.87; P < .001) for gram-positive bacteria, 0.93 (95% CI, 0.89–0.98; P = .008) for Candida species, 0.96 (95% CI, 0.91–1.02; P = .17) for gram-negative bacteria, and 0.94 (95% CI, 0.84–1.06; P = .33) for CPE. A threshold CHG skin concentration for reduced microbial detection was not observed.
Conclusions:
On a cross-sectional basis, higher CHG skin concentrations were associated with less detection of gram-positive bacteria and Candida species on the skin, but not gram-negative bacteria, including CPE. For infection prevention, targeting higher CHG skin concentrations may improve control of certain pathogens.
This study aimed to understand the population and contact tracer uptake of the quick response (QR)-code-based function of the New Zealand COVID Tracer App (NZCTA) used for digital contact tracing (DCT). We used a retrospective cohort of all COVID-19 cases between August 2020 and February 2022. Cases of Asian and other ethnicities were 2.6 times (adjusted relative risk (aRR) 2.58, 95% confidence interval (95% CI) 2.18, 3.05) and 1.8 times (aRR 1.81, 95% CI 1.58, 2.06) more likely than Māori cases to generate a token during the Delta period, and this persisted during the Omicron period. Contact tracing organization also influenced location token generation, with cases handled by National Case Investigation Service (NCIS) staff being 2.03 (95% CI 1.79, 2.30) times more likely to generate a token than cases managed by clinical staff at local Public Health Units (PHUs). Public uptake and participation in the location-based system independent of contact tracer uptake were estimated at 45%. The positive predictive value (PPV) of the QR code system was estimated to be close to nil for detecting close contacts but close to 100% for detecting casual contacts. Our paper shows that the QR-code-based function of the NZCTA likely made a negligible impact on the COVID-19 response in New Zealand (NZ) in relation to isolating potential close contacts of cases but likely was effective at identifying and notifying casual contacts.
Occurrence of cryptosporidiosis has been associated with weather conditions in many settings internationally. We explored statistical clusters of human cryptosporidiosis and their relationship with severe weather events in New Zealand (NZ). Notified cases of cryptosporidiosis from 1997 to 2015 were obtained from the national surveillance system. Retrospective space–time permutation was used to identify statistical clusters. Cluster data were compared to severe weather events in a national database. SaTScan analysis detected 38 statistically significant cryptosporidiosis clusters. Around a third (34.2%, 13/38) of these clusters showed temporal and spatial alignment with severe weather events. Of these, nearly half (46.2%, 6/13) occurred in the spring. Only five (38%, 5/13) of these clusters corresponded to a previously reported cryptosporidiosis outbreak. This study provides additional evidence that severe weather events may contribute to the development of some cryptosporidiosis clusters. Further research on this association is needed as rainfall intensity is projected to rise in NZ due to climate change. The findings also provide further arguments for upgrading the quality of drinking water sources to minimize contamination with pathogens from runoff from livestock agriculture.
OBJECTIVES/GOALS: High serum copper (Cu) levels have previously been described in bariatric patients. The kidneys are a target organ for Cu toxic insult, but the role of Cu in kidney function (eGFR) is uncertain. This study examines the association between Cu and eGFR in a bariatric population in Southeast Louisiana. METHODS/STUDY POPULATION: Seven hundred fifty patients will be recruited from the Bariatric Center of the University Medical Center in New Orleans. Inclusion criteria include: age ≥ 18 years, clinic visit between June 1, 2018 and May 31, 2024, and having a serum Cu test result. Covariables such as inflammatory markers and hormonal contraception use will be assessed as potential confounders. Blood pressure will be assessed as a potential effect modifier. Data will be obtained from electronic medical records. Two cohorts will be assembled, a pre-surgery cross-sectional cohort and another followed post-surgery. Separate models will be developed stratified by race-ethnicity. RESULTS/ANTICIPATED RESULTS: In a pilot study of bariatric patients, 26% had elevated (>155 mcg/dL) serum Cu, and pronounced racial differences were noted. Characteristics consisted of a mean BMI of approximately 50 kg/m²; 91% were female and 69% were Black. Black patients had approximately double the odds of elevated serum Cu (OR 1.98; 95% CI: 1.15, 3.4) compared with White patients. Because the kidneys both excrete Cu and are a target organ for its toxic insult, racial differences in exposure, coupled with the disproportionate rates of chronic kidney disease in Black adults, may explain an association between elevated Cu levels and eGFR in Black adults in this study. DISCUSSION/SIGNIFICANCE: Results from this study will provide insight into the prevalence of elevated Cu and its association with kidney function in a bariatric population.
Chronic kidney disease or other forms of renal impairment may result in the need for more conservative guidelines for dietary copper in bariatric medicine.
It is acknowledged that health technology assessment (HTA) is an inherently value-based activity that makes use of normative reasoning alongside empirical evidence. But the language used to conceptualise and articulate HTA's normative aspects is demonstrably unnuanced, imprecise, and inconsistently employed, undermining transparency and preventing proper scrutiny of the rationales on which decisions are based. This paper – developed through a cross-disciplinary collaboration of 24 researchers with expertise in healthcare priority-setting – seeks to address this problem by offering a clear definition of key terms and distinguishing between the types of normative commitment invoked during HTA, thus providing a novel conceptual framework for the articulation of reasoning. Through application to a hypothetical case, it is illustrated how this framework can operate as a practical tool through which HTA practitioners and policymakers can enhance the transparency and coherence of their decision-making, while enabling others to hold them more easily to account. The framework is offered as a starting point for further discussion amongst those with a desire to enhance the legitimacy and fairness of HTA by facilitating practical public reasoning, in which decisions are made on behalf of the public, in public view, through a chain of reasoning that withstands ethical scrutiny.
To assess whether measurement and feedback of chlorhexidine gluconate (CHG) skin concentrations can improve CHG bathing practice across multiple intensive care units (ICUs).
Design:
A before-and-after quality improvement study measuring patient CHG skin concentrations during 6 point-prevalence surveys (3 surveys each during baseline and intervention periods).
Setting:
The study was conducted across 7 geographically diverse ICUs with routine CHG bathing.
Participants:
Adult patients in the medical ICU.
Methods:
CHG skin concentrations were measured at the neck, axilla, and inguinal region using a semiquantitative colorimetric assay. Aggregate unit-level CHG skin concentration measurements from the baseline period and each intervention period survey were reported back to ICU leadership, which then used routine education and quality improvement activities to improve CHG bathing practice. We used multilevel linear models to assess the impact of intervention on CHG skin concentrations.
Results:
We enrolled 681 (93%) of 736 eligible patients; 92% received a CHG bath prior to survey. At baseline, CHG skin concentrations were lowest on the neck, compared to axillary or inguinal regions (P < .001). CHG was not detected on 33% of necks, 19% of axillae, and 18% of inguinal regions (P < .001 for differences in body sites). During the intervention period, ICUs that used CHG-impregnated cloths had a 3-fold increase in patient CHG skin concentrations as compared to baseline (P < .001).
Conclusions:
Routine CHG bathing performance in the ICU varied across multiple hospitals. Measurement and feedback of CHG skin concentrations can be an important tool to improve CHG bathing practice.
The White River Badlands (WRB) of South Dakota record eolian activity spanning the late Pleistocene through the latest Holocene (21 ka to modern), reflecting the effects of the last glacial period and Holocene climate fluctuations (Holocene Thermal Maximum, Medieval Climate Anomaly, and Little Ice Age). The WRB dune fields are important paleoclimate indicators in an area of the Great Plains with few climate proxies. The goal of this study is to use 1 m/pixel-resolution digital elevation models from drone imagery to distinguish Early to Middle Holocene parabolic dunes from Late Holocene parabolic dunes. Results indicate that relative ages of dunes are distinguished by slope and roughness (terrain ruggedness index). Morphological differences are attributed to postdepositional wind erosion, soil formation, and mass wasting. Early to Middle Holocene and Late Holocene paleowind directions, 324° ± 13.1° (N = 7) and 323° ± 3.0° (N = 19), respectively, are similar to the modern wind regime. Results suggest significant landscape resilience to wind erosion, which resulted in preservation of a mosaic of Early and Late Holocene parabolic dunes. Quantification of dune characteristics will help refine the chronology of eolian activity in the WRB, provide insight into drought-driven landscape evolution, and integrate WRB eolian activity into a regional paleoenvironmental context.
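The roughness metric named above, the terrain ruggedness index (TRI), can be sketched as follows. This uses one common definition, the mean absolute elevation difference between a cell and its eight neighbors; the study's exact formulation may differ:

```python
import numpy as np

def tri(dem):
    # Terrain ruggedness index: mean absolute elevation difference between
    # each interior cell and its 8 neighbors (edges left as zero for brevity).
    out = np.zeros(dem.shape, dtype=float)
    rows, cols = dem.shape
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            block = dem[i - 1:i + 2, j - 1:j + 2].astype(float)
            out[i, j] = np.abs(block - dem[i, j]).sum() / 8.0
    return out

# Toy 3x3 elevation grid: a 1 m bump in the center of a flat plain.
dem = np.array([[1, 1, 1], [1, 2, 1], [1, 1, 1]])
print(tri(dem)[1, 1])  # center cell differs from all 8 neighbors by 1 m
```

On real drone-derived DEMs, higher TRI on older dunes would reflect the postdepositional erosion and mass wasting the abstract describes.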
Shallow firn cores, in addition to a near-basal ice core, were recovered in 2018 from the Quelccaya ice cap (5470 m a.s.l.) in the Cordillera Vilcanota, Peru, and in 2017 from the Nevado Illimani glacier (6350 m a.s.l.) in the Cordillera Real, Bolivia. The two sites are ~450 km apart. Despite meltwater percolation resulting from warming, particle-based trace element records (e.g. Fe, Mg, K) in the Quelccaya and Illimani shallow cores retain well-preserved signals. The firn core chronologies, established independently by annual layer counting, show a convincing overlap, indicating the two records contain comparable signals and therefore capture similar regional-scale climatology. Trace element records at a ~1–4 cm resolution provide past records of anthropogenic emissions, dust sources, volcanic emissions, evaporite salts and marine-sourced air masses. Using novel ultra-high-resolution (120 μm) laser technology, we identify annual layer thicknesses ranging from 0.3 to 0.8 cm in a section of 2000-year-old radiocarbon-dated near-basal ice, which, compared with previous annual layer estimates, suggests that Quelccaya ice cores drilled to bedrock may be older than previously indicated by depth-age models. With the information collected from this study in combination with past studies, we emphasize the importance of collecting new surface-to-bedrock ice cores from at least the Quelccaya ice cap, particularly given its projected disappearance as soon as the 2050s.
The coronavirus disease 2019 (COVID-19) pandemic rocked the world, spurring the collapse of national commerce, international trade, education, air travel, and tourism. The global economy has been brought to its knees by the rapid spread of infection, resulting in widespread illness and many deaths. The rise in nationalism and isolationism, ethnic strife, disingenuous governmental reporting, lockdowns, travel restrictions, and vaccination misinformation have caused further problems. This has brought into stark relief the need for improved disease surveillance and health protection measures. National and international agencies that should have provided earlier warning in fact failed to do so. A robust global health network that includes enhanced cooperation with Military Intelligence, Surveillance, and Reconnaissance (ISR) assets, in conjunction with the existing international, governmental, and nongovernmental medical intelligence networks and allies and partners, would provide exceptional forward-looking early warning and is a proactive step toward making our future safe. This will be achieved by surveilling populations for new biothreats, fusing and disseminating data, and then targeting assistance to reduce disease spread in unprotected populations.
Pilot projects (“pilots”) are important for testing hypotheses in advance of investing more funds for full research studies. For some programs, such as Clinical and Translational Science Awards (CTSAs) supported by the National Center for Translational Sciences, pilots also make up a significant proportion of the research projects conducted with direct CTSA support. Unfortunately, administrative data on pilots are not typically captured in accessible databases. Though data on pilots are included in Research Performance Progress Reports, it is often difficult to extract, especially for large programs like the CTSAs where more than 600 pilots may be reported across all awardees annually. Data extraction challenges preclude analyses that could provide valuable information about pilots to researchers and administrators.
Methods:
To address those challenges, we describe a script that partially automates extraction of pilot data from CTSA research progress reports. After extraction of the pilot data, we use an established machine learning (ML) model to determine the scientific content of pilots for subsequent analysis. Analysis of ML-assigned scientific categories reveals the scientific diversity of the CTSA pilot portfolio and relationships among individual pilots and institutions.
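As a toy illustration of the kind of content-based comparison this enables, and not the established ML model the text refers to, bag-of-words cosine similarity can flag pilots with overlapping scientific interests. All project titles below are invented:

```python
from collections import Counter
import math

def cosine(a, b):
    # Cosine similarity between bag-of-words vectors of two texts.
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    norm = math.sqrt(sum(v * v for v in ca.values())) * math.sqrt(sum(v * v for v in cb.values()))
    return dot / norm if norm else 0.0

# Hypothetical pilot titles, for illustration only.
p1 = "machine learning for clinical decision support"
p2 = "clinical decision support using machine learning"
p3 = "community engagement in rural health outreach"
print(cosine(p1, p2) > cosine(p1, p3))  # similar pilots score higher
```

A production pipeline would use the established model's scientific category assignments rather than raw lexical overlap, but the underlying idea of measuring overlap among hubs' portfolios is the same.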
Results:
The CTSA pilots are widely distributed across a number of scientific areas. Content analysis identifies similar projects and the degree of overlap for scientific interests among hubs.
Conclusion:
Our results demonstrate that pilot data remain challenging to extract but can provide useful information for communicating with stakeholders, administering pilot portfolios, and facilitating collaboration among researchers and hubs.
This article is a clinical guide which discusses the “state-of-the-art” usage of the classic monoamine oxidase inhibitor (MAOI) antidepressants (phenelzine, tranylcypromine, and isocarboxazid) in modern psychiatric practice. The guide is for all clinicians, including those who may not be experienced MAOI prescribers. It discusses indications, drug-drug interactions, side-effect management, and the safety of various augmentation strategies. There is a clear and broad consensus (more than 70 international expert endorsers), based on 6 decades of experience, for the recommendations presented herein. They are based on empirical evidence and expert opinion—this guide is presented as a new specialist-consensus standard. The guide provides practical clinical advice, and is the basis for the rational use of these drugs, particularly because it improves and updates knowledge, and corrects the various misconceptions that have hitherto been prominent in the literature, partly due to insufficient knowledge of pharmacology. The guide suggests that MAOIs should always be considered in cases of treatment-resistant depression (including those melancholic in nature), and prior to electroconvulsive therapy—while taking account of patient preference. In selected cases, they may be considered earlier in the treatment algorithm than has previously been customary, and should not be regarded as drugs of last resort; they may prove decisively effective when many other treatments have failed. The guide clarifies key points on the concomitant use of incorrectly proscribed drugs such as methylphenidate and some tricyclic antidepressants. It also illustrates the straightforward “bridging” methods that may be used to transition simply and safely from other antidepressants to MAOIs.
In this 2019 cross-sectional study, we analyzed hospital records for Medicaid beneficiaries who acquired nonventilator hospital-acquired pneumonia (NVHAP). The results suggest that preventive dental treatment in the 12 months prior, or periodontal therapy in the 6 months prior, to a hospitalization is associated with a reduced risk of NVHAP.
To assess coronavirus disease 2019 (COVID-19) infection policies at leading US medical centers in the context of the initial wave of the severe acute respiratory coronavirus virus 2 (SARS-CoV-2) omicron variant.
Design:
Electronic survey study eliciting hospital policies on masking, personal protective equipment, cohorting, airborne-infection isolation rooms (AIIRs), portable HEPA filters, and patient and employee testing.
Setting and participants:
Hospital epidemiologists from 30 leading US hospitals: the U.S. News top 20 hospitals and 10 hospitals in the CDC Prevention Epicenters program.
Methods:
Survey results were reported using descriptive statistics.
Results:
Of 30 hospital epidemiologists surveyed, 23 (77%) completed the survey between February 15 and March 3, 2022. Among the responding hospitals, 18 (78%) used medical masks for universal masking and 5 (22%) used N95 respirators. 16 hospitals (70%) required universal eye protection. 22 hospitals (96%) used N95s for routine COVID-19 care and 1 (4%) reserved N95s for aerosol-generating procedures. 2 responding hospitals (9%) utilized dedicated COVID-19 wards; 8 (35%) used mixed COVID-19 and non–COVID-19 units; and 13 (57%) used both dedicated and mixed units. 4 hospitals (17%) used AIIRs for all COVID-19 patients, 10 (43%) prioritized AIIRs for aerosol-generating procedures, 3 (13%) used alternate risk-stratification criteria (not based on aerosol-generating procedures), and 6 (26%) did not routinely use AIIRs. 9 hospitals (39%) did not use portable HEPA filters, but 14 (61%) used them for various indications, most commonly as substitutes for AIIRs when unavailable or for specific high-risk areas or situations. 21 hospitals (91%) tested asymptomatic patients on admission, but postadmission testing strategies and preferred specimen sites varied substantially. 5 hospitals (22%) required regular testing of unvaccinated employees and 1 hospital (4%) reported mandatory weekly testing even for vaccinated employees during the SARS-CoV-2 omicron surge.
Conclusions:
COVID-19 infection control practices in leading hospitals vary substantially. Clearer public health guidance and transparency around hospital policies may facilitate more consistent national standards.
This chapter reviews collaborative argumentation, where a community of learners works together to advance the collective state of knowledge through debate, engagement, and dialogue. Engagement in collaborative argumentation can help students learn to think critically and independently about important issues and contested values. Students must externalize their ideas and metacognitively reflect on their developing understandings. This chapter summarizes the history of argumentation theory; how arguing can contribute to learning through making knowledge explicit, conceptual change, collaboration, and reasoning skills; how argumentation skill develops in childhood; and how argumentation varies in different cultural and social contexts. The chapter concludes by describing a variety of tools that scaffold effective argumentation, including through computer-mediated communication forums and argumentation maps.
Given the relatively small industry scale of cow-calf operations in New York relative to other regions of the country, little is known about differences in determinant values for feeder cattle. Using auction prices and quality characteristics over 7 years, differences in market, lot, and quality parameters suggest opportunities for improved marketing performance. A delta profit model is constructed to inform the timing of marketing decisions for producers. The results indicate a relatively high potential for producers to increase farm returns by delaying sales of lighter-weight feeder cattle from the fall to spring auction months, given sufficient rates of gain and reasonable overwintering costs.
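A stylized version of the delta-profit comparison described above, weighing fall sale against overwintering and spring sale. All prices, gain rates, and costs below are hypothetical placeholders, not the paper's estimates:

```python
def delta_profit(fall_price, spring_price, fall_wt, adg, days, daily_cost):
    # Return from overwintering minus return from selling in the fall:
    # spring revenue on the heavier animal, less forgone fall revenue,
    # less accumulated overwintering cost.
    spring_wt = fall_wt + adg * days
    return spring_price * spring_wt - fall_price * fall_wt - daily_cost * days

# Hypothetical scenario: sell a 500 lb calf in fall at $1.80/lb, or overwinter
# 150 days at 1.5 lb/day average daily gain and $1.20/day cost, then sell at
# $1.65/lb in spring.
print(round(delta_profit(1.80, 1.65, 500, 1.5, 150, 1.20), 2))
```

A positive delta profit favors delaying the sale; the paper's finding is that this is most often the case for lighter-weight cattle when gain rates are sufficient and overwintering costs reasonable.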
We interviewed 1,208 healthcare workers with positive SARS-CoV-2 tests between October 2020 and June 2021 to determine likely exposure sources. Overall, 689 (57.0%) had community exposures (479 from household members), 76 (6.3%) had hospital exposures (64 from other employees including 49 despite masking), 11 (0.9%) had community and hospital exposures, and 432 (35.8%) had no identifiable source of exposure.