The first book of its kind, Less than Victory explores both the impact the Vietnam War had on American Catholics and the impact of the nation's largest religious group upon its most controversial war. Through the 1960s, Roman Catholics made up one-quarter of the population and were deeply involved in all aspects of the war. In this book, Steven J. Brady argues that American Catholics introduced the moral, as opposed to the prudential, argument about the war earlier and more comprehensively than other groups. The Catholic debate on morality was three-cornered: some saw the war as inherently immoral, others as morally obligatory, while still others focused on the morality of the means – napalm, torture, and free-fire zones – that the US and the Army of the Republic of Vietnam were employing. These debates presaged greater Catholic involvement in war and peace issues, provoking a shift away from traditional ideas of a just war across American Catholic thinking and dialogue.
Patients with posttraumatic stress disorder (PTSD) exhibit smaller brain volumes in commonly reported regions such as the amygdala and hippocampus, which are associated with fear and memory processing. In the current study, we conducted a voxel-based morphometry (VBM) meta-analysis using whole-brain statistical maps with neuroimaging data from the ENIGMA-PGC PTSD working group.
Methods
T1-weighted structural neuroimaging scans from 36 cohorts (PTSD n = 1309; controls n = 2198) were processed using a standardized VBM pipeline (ENIGMA-VBM tool). We meta-analyzed the resulting statistical maps for voxel-wise differences in gray matter (GM) and white matter (WM) volumes between PTSD patients and controls, performed subgroup analyses considering the trauma exposure of the controls, and examined associations between regional brain volumes and clinical variables including PTSD (CAPS-4/5, PCL-5) and depression severity (BDI-II, PHQ-9).
Results
PTSD patients exhibited smaller GM volumes across the frontal and temporal lobes and the cerebellum, with the most significant effect in the left cerebellum (Hedges’ g = 0.22, p_corrected = .001), and smaller cerebellar WM volume (peak Hedges’ g = 0.14, p_corrected = .008). We observed similar regional differences when comparing patients to trauma-exposed controls, suggesting these structural abnormalities may be specific to PTSD. Regression analyses revealed PTSD severity was negatively associated with GM volumes within the cerebellum (p_corrected = .003), while depression severity was negatively associated with GM volumes within the cerebellum and superior frontal gyrus in patients (p_corrected = .001).
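The group comparisons above are summarized with Hedges' g. As a minimal, hypothetical sketch (the volumes below are invented for illustration, not the study's data or the ENIGMA-VBM pipeline), the effect size can be computed as:

```python
import numpy as np

def hedges_g(x1, x2):
    """Hedges' g: standardized mean difference with small-sample bias correction."""
    n1, n2 = len(x1), len(x2)
    # pooled standard deviation across both groups
    sp = np.sqrt(((n1 - 1) * np.var(x1, ddof=1) + (n2 - 1) * np.var(x2, ddof=1))
                 / (n1 + n2 - 2))
    d = (np.mean(x1) - np.mean(x2)) / sp      # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)           # small-sample correction factor
    return d * j

# hypothetical regional GM volumes (arbitrary units), not the study's data
controls = [4.1, 3.9, 4.3, 4.0, 4.2, 3.8]
patients = [3.8, 3.7, 4.0, 3.9, 3.6, 3.7]
g = hedges_g(controls, patients)   # positive g: controls larger on average
```

The correction factor shrinks Cohen's d slightly; it matters little at the cohort sizes reported here but avoids upward bias in small samples.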
Conclusions
PTSD patients exhibited widespread, regional differences in brain volumes where greater regional deficits appeared to reflect more severe symptoms. Our findings add to the growing literature implicating the cerebellum in PTSD psychopathology.
Current evidence underscores a need to transform how we do clinical research: shifting from academic-driven priorities to co-led, community-partnership-focused programs; creating accessible and relevant career pathway programs that expand opportunities for career development; and designing trainings and practices to develop cultural competence among research teams. Failures of equitable research translation contribute to health disparities. Drivers of this failed translation include lack of diversity in both researchers and participants, lack of alignment between research institutions and the communities they serve, and lack of attention to structural sources of inequity and drivers of mistrust of science and research. The Duke University Research Equity and Diversity Initiative (READI) is a program designed to better align clinical research programs with community health priorities through community engagement. Organized around three specific aims, READI supported programs targeting increased workforce diversity, workforce training in community engagement and cultural competence, inclusive research engagement principles, and development of trustworthy partnerships.
The quality of news reports about suicide can influence suicide rates. Although many researchers have aimed to assess the general safety of news reporting in terms of adherence to responsible media guidelines, none have focused on major US cable networks, a key source of public information in North America and beyond.
Aims
To characterise and compare suicide-related reporting by major US cable television news networks across the ideological spectrum.
Method
We searched a news archive (Factiva) for suicide-related transcripts from ‘the big three’ US cable television news networks (CNN, Fox News and MSNBC) over an 11-year inclusion interval (2012–2022). We included and coded segments with a major focus on suicide (death, attempt and/or thoughts) for general content, putatively harmful and protective characteristics and overarching narratives. We used chi-square tests to compare these variables across networks.
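The cross-network comparisons rest on Pearson chi-square tests. A minimal sketch follows; the contingency counts are hypothetical, invented only to fall within the percentage ranges reported, and the statistic is computed by hand rather than with a stats package:

```python
import numpy as np

def chi_square_stat(table):
    """Pearson chi-square statistic for an r x c contingency table."""
    table = np.asarray(table, dtype=float)
    row = table.sum(axis=1, keepdims=True)
    col = table.sum(axis=0, keepdims=True)
    expected = row * col / table.sum()        # expected counts under independence
    return float(((table - expected) ** 2 / expected).sum())

# Hypothetical counts (not the study's data): segments mentioning a suicide
# method vs. not, per network, chosen to sit in the reported 42-52% range.
counts = [[186, 212],   # CNN: method mentioned, not mentioned
          [62, 57],     # Fox News
          [40, 55]]     # MSNBC
stat = chi_square_stat(counts)
```

For a 3×2 table the statistic is compared against a chi-square distribution with (3−1)(2−1) = 2 degrees of freedom, whose 5% critical value is about 5.99.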
Results
We identified 612 unique suicide-related segments (CNN, 398; Fox News, 119; MSNBC, 95). Across all networks, these segments tended to focus on suicide death (72–89%) and presented stories about specific individuals (61–87%). Multiple putatively harmful characteristics were evident in segments across networks, including mention of a suicide method (42–52%) – with hanging (15–30%) and firearm use (12–20%) the most commonly mentioned – and stigmatising language (39–43%). Only 15 segments (2%) presented a story of survival.
Conclusions
Coverage of suicide stories by major US cable news networks was often inconsistent with responsible reporting guidelines. Further engagement with networks and journalists is thus warranted.
Almost 12% of the human population have insufficient access to food and hence are at risk from nutrient deficiencies and related conditions, such as anaemia and stunting. Ruminant meat and milk are rich in protein and micronutrients, making them a highly nutritious food source for human consumption. Conversely, ruminant production contributes to methane (CH4) emissions, a greenhouse gas (GHG) with a global warming potential (GWP) 27–30 times greater than that of carbon dioxide (CO2). Nonetheless, ruminant production plays a crucial role in the circular bioeconomy by upcycling agricultural products that cannot be consumed by humans into valuable and nutritious food, whilst delivering important ecosystem services. Taking on board the complexities of ruminant production and the need to improve both human and planetary health, there is increasing emphasis on developing innovative solutions to achieve sustainable ruminant production within the ‘One Health’ framework. Specifically, research and innovation will undoubtedly continue to focus on (1) genetics and breeding, (2) animal nutrition and (3) animal health, to achieve food security and human health whilst limiting environmental impact. Implementation of resultant innovations within the agri-food sector will require several enablers, including large-scale investment, multi-actor partnerships, scaling, regulatory approval and, importantly, social acceptability. This review outlines the grand challenges of achieving sustainable ruminant production and the likely research and innovation landscape over the next 15 years and beyond, specifically outlining the pathways and enablers required to achieve sustainable ruminant production within the One Health framework.
Female genital schistosomiasis (FGS) is a chronic disease manifestation of the waterborne parasitic infection Schistosoma haematobium that affects up to 56 million women and girls, predominantly in sub-Saharan Africa (SSA). Starting from early childhood, this stigmatizing gynaecological condition is caused by the presence of Schistosoma eggs and associated toxins within the genital tract. Schistosoma haematobium typically causes debilitating urogenital symptoms, mostly as a consequence of inflammation, which include bleeding, discharge and lower abdominal pelvic pain. Chronic complications of FGS include adverse sexual and reproductive health and rights outcomes such as infertility, ectopic pregnancy and miscarriage. FGS is associated with prevalent human immunodeficiency virus and may increase the susceptibility of women to high-risk human papillomavirus infection. Across SSA, and even in clinics outside endemic areas, the lack of awareness and available resources among both healthcare professionals and the public means FGS is underreported, misdiagnosed and inadequately treated. Several studies have highlighted research needs and priorities in FGS, including better training, accessible and accurate diagnostic tools, and treatment guidelines. On 6 September 2024, LifeArc, the Global Schistosomiasis Alliance and partners from the BILGENSA Research Network (Genital Bilharzia in Southern Africa) convened a consultative, collaborative and translational workshop: ‘Female Genital Schistosomiasis: Translational Challenges and Opportunities’. Its ambition was to identify practical solutions that could address these research needs and drive appropriate actions towards progress in tackling FGS. Here, we present the outcomes of that workshop – a series of discrete translational actions to better galvanize the community and research funders.
Older adults with treatment-resistant depression (TRD) benefit more from treatment augmentation than switching. It is useful to identify moderators that influence these treatment strategies for personalised medicine.
Aims
Our objective was to test whether age, executive dysfunction, comorbid medical burden, comorbid anxiety or the number of previous adequate antidepressant trials could moderate the superiority of augmentation over switching. A significant moderator would influence the differential effect of augmentation versus switching on treatment outcomes.
Method
We performed a preplanned moderation analysis of data from the Optimizing Outcomes of Treatment-Resistant Depression in Older Adults (OPTIMUM) randomised controlled trial (N = 742). Participants were 60 years old or older with TRD. Participants were either (a) randomised to antidepressant augmentation with aripiprazole (2.5–15 mg), bupropion (150–450 mg) or lithium (target serum drug level 0.6 mmol/L) or (b) switched to bupropion (150–450 mg) or nortriptyline (target serum drug level 80–120 ng/mL). Treatment duration was 10 weeks. The two main outcomes of this analysis were (a) symptom improvement, defined as change in Montgomery–Åsberg Depression Rating Scale (MADRS) scores from baseline to week 10, and (b) remission, defined as a MADRS score of 10 or less at week 10.
Results
Of the 742 participants, 480 were randomised to augmentation and 262 to switching. The number of adequate previous antidepressant trials was a significant moderator of depression symptom improvement (b = −1.6, t = −2.1, P = 0.033, 95% CI −3.0 to −0.1), where b is the coefficient of the relationship (i.e. effect size) and t is the t-statistic for that coefficient associated with the P-value. The effect was similar across all augmentation strategies. No other putative moderators were significant.
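A moderation analysis of this kind fits a treatment-by-moderator interaction term. The sketch below simulates data in which the interaction coefficient is planted at −1.6 to mirror the reported b; the outcome model and noise level are invented for illustration and are not OPTIMUM's actual analysis:

```python
import numpy as np

rng = np.random.default_rng(42)
n, n_aug = 742, 480
treat = (np.arange(n) < n_aug).astype(float)       # 1 = augmentation, 0 = switch
trials = rng.integers(1, 6, size=n).astype(float)  # previous adequate antidepressant trials
# Simulated symptom improvement; the treatment-by-trials interaction is
# set to -1.6 purely to mirror the reported coefficient.
y = 8 + 3 * treat - 0.5 * trials - 1.6 * treat * trials + rng.normal(0, 4, size=n)

# Design matrix: intercept, treatment, moderator, treatment x moderator
X = np.column_stack([np.ones(n), treat, trials, treat * trials])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
interaction_b = beta[3]   # moderation estimate; should land near -1.6
```

A significant negative interaction means the benefit of augmentation over switching shrinks as the number of prior trials grows, which is exactly the pattern the conclusion draws on.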
Conclusions
Augmentation was superior to switching antidepressants only in older patients with fewer than three previous antidepressant trials. This suggests that other intervention strategies should be considered following three or more trials.
Biostatisticians increasingly use large language models (LLMs) to enhance efficiency, yet practical guidance on responsible integration is limited. This study explores current LLM usage, challenges, and training needs to support biostatisticians.
Methods:
A cross-sectional survey was conducted across three biostatistics units at two academic medical centers. The survey assessed LLM usage across three key professional activities: communication and leadership, clinical and domain knowledge, and quantitative expertise. Responses were analyzed using descriptive statistics, while free-text responses underwent thematic analysis.
Results:
Of 208 eligible biostatisticians (162 staff and 46 faculty), 69 (33.2%) responded. Among them, 44 (63.8%) reported using LLMs; of the 43 who answered the frequency question, 20 (46.5%) used them daily and 16 (37.2%) weekly. LLMs improved productivity in coding, writing, and literature review; however, 29 of 41 respondents (70.7%) reported significant errors, including incorrect code, statistical misinterpretations, and hallucinated functions. Key verification strategies included expertise, external validation, debugging, and manual inspection. Among 58 respondents providing training feedback, 44 (75.9%) requested case studies, 40 (69.0%) sought interactive tutorials, and 37 (63.8%) desired structured training.
Conclusions:
LLM usage is notable among respondents at two academic medical centers, though response patterns likely reflect early adopters. While LLMs enhance productivity, challenges like errors and reliability concerns highlight the need for verification strategies and systematic validation. The strong interest in training underscores the need for structured guidance. As an initial step, we propose eight core principles for responsible LLM integration, offering a preliminary framework for structured usage, validation, and ethical considerations.
It remains unclear which individuals with subthreshold depression benefit most from psychological intervention, and what long-term effects this has on symptom deterioration, response and remission.
Aims
To synthesise psychological intervention benefits in adults with subthreshold depression up to 2 years, and explore participant-level effect-modifiers.
Method
Randomised trials comparing psychological intervention with inactive control were identified via systematic search. Authors were contacted to obtain individual participant data (IPD), analysed using Bayesian one-stage meta-analysis. Treatment–covariate interactions were added to examine moderators. Hierarchical-additive models were used to explore treatment benefits conditional on baseline Patient Health Questionnaire 9 (PHQ-9) values.
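The paper uses a Bayesian one-stage IPD model. As a rough two-stage analogue, per-study standardised mean differences can be pooled with DerSimonian–Laird random effects; the per-study values below are hypothetical, chosen only to fall within the effect range reported:

```python
import numpy as np

def pool_smd(g, se):
    """DerSimonian-Laird random-effects pooling of per-study SMDs."""
    g, se = np.asarray(g, float), np.asarray(se, float)
    w = 1.0 / se**2                             # fixed-effect (inverse-variance) weights
    mu_fe = np.sum(w * g) / np.sum(w)
    q = np.sum(w * (g - mu_fe) ** 2)            # Cochran's Q heterogeneity statistic
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(g) - 1)) / c)     # between-study variance estimate
    w_re = 1.0 / (se**2 + tau2)                 # random-effects weights
    pooled = np.sum(w_re * g) / np.sum(w_re)
    se_pooled = np.sqrt(1.0 / np.sum(w_re))
    return pooled, se_pooled

# hypothetical per-study SMDs and standard errors (not the included studies)
smds = [-0.48, -0.35, -0.27, -0.40, -0.31]
ses = [0.10, 0.12, 0.09, 0.15, 0.11]
pooled, se_p = pool_smd(smds, ses)
```

The one-stage approach taken in the paper instead models all 10 671 participants jointly, which is what makes participant-level treatment-covariate interactions (e.g. with baseline PHQ-9) estimable.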
Results
IPD from 10 671 individuals (50 studies) were included. We found significant effects on depressive symptom severity up to 12 months (standardised mean difference [s.m.d.] = −0.48 to −0.27). Effects could not be ascertained up to 24 months (s.m.d. = −0.18). Similar findings emerged for 50% symptom reduction (relative risk = 1.27–2.79), reliable improvement (relative risk = 1.38–3.17), deterioration (relative risk = 0.67–0.54) and close-to-symptom-free status (relative risk = 1.41–2.80). Among participant-level moderators, only initial depression and anxiety severity were highly credible (P > 0.99). Predicted treatment benefits decreased with lower symptom severity but remained minimally important even for very mild symptoms (s.m.d. = −0.33 for PHQ-9 = 5).
Conclusions
Psychological intervention reduces the symptom burden in individuals with subthreshold depression up to 1 year, and protects against symptom deterioration. Benefits up to 2 years are less certain. We find strong support for intervention in subthreshold depression, particularly with PHQ-9 scores ≥ 10. For very mild symptoms, scalable treatments could be an attractive option.
Objectives/Goals: Monensin is FDA approved for use in veterinary medicine. Recent studies have pointed to its potent anticancer activity. Since the de novo drug discovery process typically takes 10 to 15 years and requires an investment of approximately $1.3 to $3 billion, drug repositioning can bypass several steps in this process and increase the potential for success. Methods/Study Population: Cell viability assays were conducted on human MDA-MB-231, MDA-MB-468, and MCF10A breast cancer cell lines and mouse EO771 and 4T1 breast cancer cell lines. The MDA-MB-231 cell line was used in all the studies unless specified otherwise. Time course levels of Bcl-2, Bak, p62, and LC3II were assessed via Western blotting with GAPDH as a loading control. Proteomics analysis was conducted by the IDEA National Resource for Quantitative Proteomics. Time course levels of major histocompatibility complex (MHC) I and II and calreticulin were evaluated using flow cytometry. At least three biological replicates were conducted for each experiment. Results/Anticipated Results: Monensin and several of its novel analogs were potent against human and mouse breast cancer cell lines. Furthermore, they induced apoptotic cell death as evidenced by Annexin V/PI assay, downregulation of Bcl-2, and upregulation of Bak in MDA-MB-231 cells. Proteomics analysis revealed that several molecular pathways related to MHC class I and II antigen presentation were significantly altered following treatment with these compounds. Additionally, monensin and its analogs significantly increased the expression of MHC class I and II. Our studies also showed that monensin and its analogs increase surface calreticulin levels. Treatment of MDA-MB-231 cells with these compounds also resulted in an increase in p62 and LC3II expression, suggesting a disruption of the autophagic process.
Discussion/Significance of Impact: These results suggest that monensin and its analogs not only exhibit anti-breast cancer cell activity but also modulate immune-related pathways. By disrupting autophagy and enhancing calreticulin levels, these compounds may potentiate antitumor immune responses, providing a promising avenue for drug repositioning in cancer therapy.
Objectives/Goals: Manual skin assessment in chronic graft-versus-host disease (cGVHD) can be time-consuming and inconsistent (affected-area estimates differing by >20%), even for experts. Building on previous work, we explore methods to use unmarked photos to train artificial intelligence (AI) models, aiming to improve performance by expanding and diversifying the training data without additional burden on experts. Methods/Study Population: Common to many medical imaging projects, we have a small number of expert-marked patient photos (N = 36, n = 360) and many unmarked photos (N = 337, n = 25,842). Dark skin (Fitzpatrick type 4+) is underrepresented in both sets: 11% of patients in the marked set and 9% in the unmarked set. In addition, a set of 20 expert-marked photos from 20 patients was withheld from training to assess model performance, with 20% dark skin types. Our gold-standard markings were manual contours drawn around affected skin by a trained expert. Three AI training methods were tested. Our established baseline uses only the small number of marked photos (supervised method). The semi-supervised method uses a mix of marked and unmarked photos with human feedback. The self-supervised method uses only unmarked photos without any human feedback. Results/Anticipated Results: We evaluated performance by comparing predicted skin areas with expert markings. The error was given by the absolute difference between the percentage areas marked by the AI model and the expert, where lower is better. Across all test patients, the median error was 19% (interquartile range 6–34) for the supervised method and 10% (5–23) for the semi-supervised method, which incorporated unmarked photos from 83 patients. On dark skin types, the median error was 36% (18–62) for supervised and 28% (14–52) for semi-supervised, compared to a median error on light skin of 18% (5–26) for supervised and 7% (4–17) for semi-supervised.
Self-supervised, using all 337 unmarked patients, is expected to further improve performance and consistency due to increased data diversity. Full results will be presented at the meeting. Discussion/Significance of Impact: By automating skin assessment for cGVHD, AI could improve accuracy and consistency compared to manual methods. If translated to clinical use, this would ease clinical burden and scale to large patient cohorts. Future work will focus on ensuring equitable performance across all skin types, providing fair and accurate assessments for every patient.
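The error metric described in the Results above (absolute difference between AI- and expert-marked affected-area percentages, summarised by median and interquartile range) can be sketched with hypothetical per-patient values:

```python
import numpy as np

def area_error(pred_pct, expert_pct):
    """Absolute difference between AI- and expert-marked affected-area percentages."""
    return np.abs(np.asarray(pred_pct, float) - np.asarray(expert_pct, float))

# hypothetical per-patient affected-area percentages (not the study's data)
expert = [30, 10, 55, 20, 40]
model = [25, 22, 50, 21, 12]
errs = area_error(model, expert)        # per-patient errors: [5, 12, 5, 1, 28]
median_err = np.median(errs)
q1, q3 = np.percentile(errs, [25, 75])  # interquartile range
```

Reporting the median with its interquartile range, as the abstract does, keeps the summary robust to the occasional large per-patient error.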
Objectives/Goals: Osteoarthritis (OA) is a multifactorial disease in which sustained gut inflammation is a continued source of inflammatory mediators driving degenerative processes in joints. The goal was to use a spontaneous equine model to compare the fecal and leukocyte microbiome and their correlation to the transcriptome in OA. Methods/Study Population: Seventy-six horses (31 OA, 45 controls) were enrolled by population-based sampling. Feces and peripheral blood mononuclear cells (PBMC) were collected. Horses were determined to have OA by clinical and radiographic evidence. Horses were excluded if they had received medications or joint injections within two months. Fecal and circulating leukocyte bacterial microbial 16S-seq was performed. Bulk RNA-seq of PBMC was performed on the Illumina platform. Gene expression data were mapped to the equine genome, and differential expression analysis was performed with DESeq2. QIIME 2 was used for microbial analysis. Enrichment analysis was performed with clusterProfiler. Correlation analyses were performed between the datasets. Results/Anticipated Results: Beta and alpha microbial diversity differed in feces and PBMC of OA vs. healthy horses. Horses with OA had an increased Firmicutes to Bacteroidetes ratio compared with controls. The fecal microbiome of OA horses had significantly higher amounts of the Firmicutes genus Oribacterium. Discussion/Significance of Impact: These data suggest that altered microbiome and PBMC gene expression are associated with naturally occurring OA in the translational equine model. While Oribacterium has been detected in humans with rheumatoid arthritis, its role in OA warrants further proteomic and metabolomic profiling.
Objectives/Goals: Lung transplant is a life-saving surgery for patients with advanced lung diseases, yet long-term survival remains poor. The clinical features and lung injury patterns of lung transplant recipients who die early versus those who survive longer term remain undefined. Here, we use cell-free DNA and rejection parameters to help elucidate this further. Methods/Study Population: Lung transplant candidacy prioritizes patients who have a high mortality risk within 2 years and will likely survive beyond 5 years. We stratified patients who died within 2 years of transplant as early death (n = 50) and those who survived past 5 years as long-term survivors (n = 53). Lung transplant recipients had serial blood collected as part of two prospective cohort studies. Cell-free DNA (cfDNA) was quantified using relative (percent donor-derived cfDNA, %ddcfDNA) and absolute (nuclear-derived n-cfDNA, mitochondrial-derived mt-cfDNA) measurements. As part of routine posttransplant clinical care, all patients underwent pulmonary function testing (PFT), surveillance bronchoscopy with bronchoalveolar lavage (BAL), transbronchial biopsy (TBBx), and donor-specific antibody testing (DSA). Results/Anticipated Results: Over the first 2 years after transplant, early-death patients experienced more episodes of antibody-mediated rejection than long-term survivors. Discussion/Significance of Impact: Clinically, early-death patients perform worse on routine surveillance PFTs and experience a worse degree of chronic lung allograft dysfunction (CLAD). These patients also have higher levels of cfDNA as quantified by n-cfDNA and mt-cfDNA. These results provide preliminary evidence that early-death patients have worse allograft rejection, dysfunction, and molecular injury.
Powered equipment for patient handling was designed to alleviate Emergency Medical Service (EMS) clinician injuries while lifting patients. This project evaluated the organizational rationale for purchasing powered equipment and the outcomes from equipment use.
Methods:
This project analyzed secondary data obtained via an insurance Safety Intervention Grant (SIG) program in Ohio, USA. These data were primarily in reports from EMS organizations. Investigators applied a mixed-methods approach, analyzing quantitative data from 297 grants and qualitative data from a sample of 64 grants. Analysts abstracted data related to: work-related injuries or risk of musculoskeletal disorders (MSD), employee feedback regarding acceptance or rejection, and impact on quality, productivity, staffing, and cost.
Results:
A total of $16.67 million (2018 adjusted USD) was spent from 2005 through 2018 for powered cots, powered loading systems, powered stair chairs, and non-patient handling equipment (eg, chest compression system, powered roller). Organizations purchased equipment to accommodate staff demographics (height, age, sex) and patient characteristics (weight, impairments). Grantees were fire departments (n = 254) and public (n = 19) and private (n = 24) EMS organizations consisting of career (45%), volunteer (20%), and a combination of career and volunteer (35%) staff. Powered equipment reduced reported musculoskeletal injuries, and organizations reported it improved EMS clinicians’ safety. Organization feedback was mostly positive, and no organization indicated outright rejection of the purchased equipment. Analyst-identified design advantages for powered cots included increased patient weight capacity and hydraulic features, but the greater weight of the powered cot was a disadvantage. The locking mechanism to hold the cot during transportation was reported as an advantage, but it was a disadvantage for older cots without a compatibility conversion kit. Around one-half of organizations described a positive impact on quality of care and patient safety resulting from the new equipment.
Conclusion:
Overall, organizations reported improved EMS clinicians’ safety but noted that not all safety concerns were addressed by the new equipment.
Since the 1950s, the United Nations (UN) has designated days (e.g., World Wetland Day), years (e.g., Year of the Gorilla) and decades (e.g., Decade on Biodiversity) with a commonly stated goal to raise awareness and funding for conservation-oriented initiatives, and these Days, Years and Decades of ‘…’ (hereafter ‘DYDOs’) continue. However, the effectiveness of these initiatives to achieve their stated objectives and to contribute to positive conservation outcomes is unclear. Here we used a binary analysis change model to evaluate the effectiveness of UN conservation-oriented DYDOs observed between 1974 and 2020. We also examined four case studies to understand the different strategies employed to meet specified conservation goals. We found that DYDOs apparently contributed to positive conservation outcomes when they were tied to social media campaigns and/or when they were strategically situated in current events or global discourse. Although the outcomes of DYDOs were varied, those with longer timescales and those that engaged local communities were more likely to be successful. We suggest that DYDO organizers should identify all possible paths of action through the lens of the change model outlined in this paper to strengthen the value and effectiveness of these initiatives in the future. Using this approach could help ensure that resources are used efficiently and effectively, and that initiatives yield positive conservation outcomes that benefit people and nature.
Maladaptive daydreaming is a distinct syndrome in which the main symptom is excessive vivid fantasising that causes clinically significant distress and functional impairment in academic, vocational and social domains. Unlike normal daydreaming, maladaptive daydreaming is persistent, compulsive and detrimental to one’s life. It involves detachment from reality in favour of intense emotional engagement with alternative realities and often includes specific features such as psychomotor stereotypies (e.g. pacing in circles, jumping or shaking one’s hands), mouthing dialogues, facial gestures or enacting fantasy events. Comorbidity is common, but existing disorders do not account for the phenomenology of the symptoms. Whereas non-specific therapy is ineffective, targeted treatment seems promising. Thus, we propose that maladaptive daydreaming be considered a formal syndrome in psychiatric taxonomies, positioned within the dissociative disorders category. Maladaptive daydreaming satisfactorily meets criteria for conceptualisation as a psychiatric syndrome, including reliable discrimination from other disorders and solid interrater agreement. It involves significant dissociative aspects, such as disconnection from perception, behaviour and sense of self, and has some commonalities with but is not subsumed under existing dissociative disorders. Formal recognition of maladaptive daydreaming as a dissociative disorder will encourage awareness of a growing problem and spur theoretical, research and clinical developments.
Beliefs about other players’ strategies are crucial in determining outcomes for coordination games. If players are to coordinate on an efficient equilibrium, they must believe that others will coordinate with them. In many settings there is uncertainty about beliefs as well as strategies. Do people consider these “higher-order” beliefs (beliefs about beliefs) when making coordination decisions? I design a modified stag hunt experiment that allows me to identify how these higher-order beliefs and uncertainty about higher-order beliefs matter for coordination. Players prefer to invest especially when they believe that others are “optimistic” that they will invest; but knowledge that others think them unlikely to invest does not cause players to behave differently than when they do not know what their partners think about them. Thus resolving uncertainty about beliefs can result in marked efficiency gains.
We replicate an influential study of monetary incentive effects by Jamal and Sunder (1991) to illustrate the difficulties of drawing causal inferences from a treatment manipulation when other features of the experimental design vary simultaneously. We first show that the Jamal and Sunder (1991) conclusions hinge on one of their laboratory market sessions, conducted only within their fixed-pay condition, that is characterized by a thin market and asymmetric supply and demand curves. When we replicate this structure multiple times under both fixed pay and pay tied to performance, our findings do not support Jamal and Sunder's (1991) conclusion about the incremental effects of performance-based compensation, suggesting that other features varied in that study likely account for their observed difference. Our ceteris paribus replication leaves us unable to offer any generalized conclusions about the effects of monetary incentives in other market structures, but the broader point is to illustrate that experimental designs that attempt to generalize effects by varying multiple features simultaneously can jeopardize the ability to draw causal inferences about the primary treatment manipulation.