Patients with posttraumatic stress disorder (PTSD) exhibit smaller volumes in commonly reported brain regions, including the amygdala and hippocampus, which are associated with fear and memory processing. In the current study, we conducted a voxel-based morphometry (VBM) meta-analysis of whole-brain statistical maps using neuroimaging data from the ENIGMA-PGC PTSD working group.
Methods
T1-weighted structural neuroimaging scans from 36 cohorts (PTSD n = 1309; controls n = 2198) were processed using a standardized VBM pipeline (ENIGMA-VBM tool). We meta-analyzed the resulting statistical maps for voxel-wise differences in gray matter (GM) and white matter (WM) volumes between PTSD patients and controls, performed subgroup analyses considering the trauma exposure of the controls, and examined associations between regional brain volumes and clinical variables, including PTSD severity (CAPS-4/5, PCL-5) and depression severity (BDI-II, PHQ-9).
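The abstract does not specify the pooling model, so as an illustration only, here is a minimal sketch of a voxel-wise inverse-variance meta-analysis of Hedges' g maps (the effect size reported below); the function names and toy data are hypothetical and are not part of the ENIGMA-VBM tool:

```python
import numpy as np

def hedges_g(mean_pt, mean_ct, sd_pt, sd_ct, n_pt, n_ct):
    """Per-voxel Hedges' g: bias-corrected standardized mean difference."""
    df = n_pt + n_ct - 2
    pooled_sd = np.sqrt(((n_pt - 1) * sd_pt**2 + (n_ct - 1) * sd_ct**2) / df)
    j = 1.0 - 3.0 / (4.0 * df - 1.0)  # small-sample bias correction
    return j * (mean_pt - mean_ct) / pooled_sd

def meta_analyze(g_maps, n_pt, n_ct):
    """Inverse-variance-weighted pooling of per-cohort g maps.

    g_maps : (n_cohorts, n_voxels) array of Hedges' g values.
    Returns the pooled g map and its per-voxel z statistic.
    """
    g = np.asarray(g_maps, dtype=float)
    n_pt = np.asarray(n_pt, dtype=float)[:, None]
    n_ct = np.asarray(n_ct, dtype=float)[:, None]
    # standard large-sample approximation to Var(g)
    var_g = (n_pt + n_ct) / (n_pt * n_ct) + g**2 / (2 * (n_pt + n_ct))
    w = 1.0 / var_g
    g_pooled = (w * g).sum(axis=0) / w.sum(axis=0)
    z = g_pooled * np.sqrt(w.sum(axis=0))
    return g_pooled, z

# Toy example: 3 cohorts, 5 voxels.
rng = np.random.default_rng(0)
g_pooled, z = meta_analyze(rng.normal(-0.2, 0.1, size=(3, 5)),
                           n_pt=[40, 30, 50], n_ct=[60, 45, 70])
```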
Results
PTSD patients exhibited smaller GM volumes across the frontal and temporal lobes and cerebellum, with the most significant effect in the left cerebellum (Hedges’ g = 0.22, p_corrected = .001), and smaller cerebellar WM volume (peak Hedges’ g = 0.14, p_corrected = .008). We observed similar regional differences when comparing patients to trauma-exposed controls, suggesting these structural abnormalities may be specific to PTSD. Regression analyses revealed that PTSD severity was negatively associated with GM volumes within the cerebellum (p_corrected = .003), whereas depression severity was negatively associated with GM volumes within the cerebellum and superior frontal gyrus in patients (p_corrected = .001).
Conclusions
PTSD patients exhibited widespread regional differences in brain volumes, with greater regional deficits appearing to reflect more severe symptoms. Our findings add to the growing literature implicating the cerebellum in PTSD psychopathology.
Quantum field theory predicts a nonlinear response of the vacuum to strong electromagnetic fields of macroscopic extent. This fundamental tenet has remained experimentally challenging and is yet to be tested in the laboratory. A particularly distinct signature of the resulting optical activity of the quantum vacuum is vacuum birefringence, which offers an excellent opportunity for a precision test of nonlinear quantum electrodynamics in an uncharted parameter regime. Recently, the high-intensity Relativistic Laser at the X-ray Free Electron Laser, provided by the Helmholtz International Beamline for Extreme Fields, was inaugurated at the High Energy Density scientific instrument of the European X-ray Free Electron Laser. We make the case that this globally unique combination of an X-ray free-electron laser and an ultra-intense near-infrared laser, together with recent advances in high-precision X-ray polarimetry, refinements of prospective discovery scenarios, and progress in their accurate theoretical modelling, has set the stage for performing an actual discovery experiment of quantum vacuum nonlinearity.
Several Elaeagnus species (autumn olive [Elaeagnus umbellata Thunb.], Russian olive [Elaeagnus angustifolia L.], and thorny olive [Elaeagnus pungens Thunb.]) are invasive in North America. Elaeagnus pungens is prevalent throughout much of the southeastern United States, commonly overtaking wooded and natural areas, bottomlands, and roadsides. While many management methods, including several herbicide treatments, have been evaluated, the efficacy of these methods can vary based on the size and density of the target plants. Further, personal communication with land managers revealed a lack of information incorporating application effort, duration, and associated cost into assessments of treatment efficacy and usefulness. We evaluated three herbicide application methods using the free acid formulation of triclopyr in an E. pungens–infested forest in South Carolina, USA, to determine the effectiveness of each application method. We estimated pretreatment E. pungens biomass and destructively harvested all live material posttreatment to obtain actual biomass values. Foliar herbicide application was ineffective, but both cut stump and basal bark applications nearly eliminated E. pungens in the treatment plots. The basal bark application took slightly more time to complete than the cut stump treatment but was described as less physically demanding by applicators. Based on treatment efficacy and time required, the basal bark application method seems most prudent for controlling E. pungens in these areas. These results will help land managers more effectively use their resources for invasive woody plant control.
In response to the COVID-19 pandemic, we rapidly implemented a plasma coordination center within two months to support transfusions for two outpatient randomized controlled trials. The center design was based on an investigational drug services model and a Food and Drug Administration-compliant database to manage blood product inventory and trial safety.
Methods:
A core investigational team adapted a cloud-based platform to randomize patient assignments and track inventory distribution of control plasma and high-titer COVID-19 convalescent plasma of different blood groups from 29 donor collection centers directly to blood banks serving 26 transfusion sites.
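The platform's internals are not described here, but the blood-group constraint it had to enforce is standard transfusion practice; the following hypothetical sketch of compatible-unit selection is illustrative only (the data model and names are assumptions, not the actual system):

```python
# Plasma compatibility is the reverse of red-cell compatibility:
# AB plasma suits any recipient, while AB recipients need AB plasma.
COMPATIBLE_PLASMA = {  # recipient ABO group -> acceptable plasma ABO groups
    "O":  {"O", "A", "B", "AB"},
    "A":  {"A", "AB"},
    "B":  {"B", "AB"},
    "AB": {"AB"},
}

def select_unit(recipient_abo, inventory):
    """Return the first inventory unit compatible with the recipient."""
    acceptable = COMPATIBLE_PLASMA[recipient_abo]
    for unit in inventory:
        if unit["abo"] in acceptable:
            return unit
    return None  # would trigger restocking in a real workflow

inventory = [{"id": "U1", "abo": "A"}, {"id": "U2", "abo": "AB"}]
print(select_unit("B", inventory))  # -> unit U2 (AB plasma suits all groups)
```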
Results:
We performed 1,351 transfusions in 16 months. The transparency of the digital inventory at each site was critical to facilitate qualification, randomization, and overnight shipments of blood group-compatible plasma for transfusions into trial participants. While inventory challenges were heightened with COVID-19 convalescent plasma, the cloud-based system and the flexible approach of the plasma coordination center staff across the blood bank network enabled decentralized procurement and distribution of investigational products to maintain inventory thresholds and overcome local supply chain restraints at the sites.
Conclusion:
The rapid creation of a plasma coordination center for outpatient transfusions is infrequent in the academic setting. Distributing more than 3,100 plasma units to blood banks charged with managing investigational inventory across the U.S. in a decentralized manner posed operational and regulatory challenges while providing opportunities for the plasma coordination center to contribute to research of global importance. This program can serve as a template in subsequent public health emergencies.
The coronavirus disease 2019 (COVID-19) pandemic has placed significant burden on healthcare systems. We compared Clostridioides difficile infection (CDI) epidemiology before and during the pandemic across 71 hospitals participating in the Canadian Nosocomial Infection Surveillance Program. Using an interrupted time series analysis, we showed that CDI rates significantly increased during the COVID-19 pandemic.
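The abstract does not give the model specification; a common implementation of an interrupted time series analysis for infection rates is segmented Poisson regression with an exposure offset. Below is a minimal sketch with simulated stand-in data (the interruption point and rates are illustrative, not the surveillance data):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
months = np.arange(48)
post = (months >= 36).astype(float)            # pandemic onset at month 36
time_since = np.where(post == 1, months - 36, 0.0)
patient_days = rng.integers(9_000, 11_000, size=48)
cases = rng.poisson(0.0006 * patient_days * (1 + 0.3 * post))

# Segmented Poisson regression: baseline trend, level change, slope change.
X = sm.add_constant(np.column_stack([months, post, time_since]))
fit = sm.GLM(cases, X, family=sm.families.Poisson(),
             offset=np.log(patient_days)).fit()
print(np.exp(fit.params[2]))  # rate ratio for the step change at onset
```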
To describe the genomic analysis and epidemiologic response related to a slow and prolonged methicillin-resistant Staphylococcus aureus (MRSA) outbreak.
Design:
Prospective observational study.
Setting:
Neonatal intensive care unit (NICU).
Methods:
We conducted an epidemiologic investigation of a NICU MRSA outbreak involving serial baby and staff screening to identify opportunities for decolonization. Whole-genome sequencing was performed on MRSA isolates.
Results:
A NICU with excellent hand hygiene compliance and a long-standing record of minimal healthcare-associated infections experienced an MRSA outbreak involving 15 babies and 6 healthcare personnel (HCP). In total, 12 cases occurred slowly over a 1-year period (mean, 30.7 days apart), followed by 3 additional cases 7 months later. Multiple progressive infection prevention interventions were implemented, including contact precautions and cohorting of MRSA-positive babies, hand hygiene observers, enhanced environmental cleaning, screening of babies and staff, and decolonization of carriers. Only decolonization of HCP found to be persistent carriers of MRSA was successful in stopping transmission and ending the outbreak. Genomic analyses identified bidirectional transmission between babies and HCP during the outbreak.
Conclusions:
In comparison to fast outbreaks, outbreaks that are “slow and sustained” may be more common in units with strong existing infection prevention practices, such that a series of breaches must align to result in a case. We identified a slow outbreak that persisted among staff and babies and was stopped only by identifying and decolonizing persistent MRSA carriage among staff. A repeated decolonization regimen was successful in allowing previously persistent carriers to safely continue work duties.
Healthcare workers (HCWs) are a high-priority group for coronavirus disease 2019 (COVID-19) vaccination and serve as sources for public information. In this analysis, we assessed vaccine intentions, factors associated with intentions, and change in uptake over time in HCWs.
Methods:
A prospective cohort study of COVID-19 seroprevalence was conducted with HCWs in a large healthcare system in the Chicago area. Participants completed surveys from November 25, 2020, to January 9, 2021, and from April 24 to July 12, 2021, on COVID-19 exposures, diagnosis and symptoms, demographics, and vaccination status.
Results:
Of 4,180 HCWs who responded to a survey, 77.1% indicated that they intended to get the vaccine, and 23.2% of this group had already received at least 1 dose; 17.4% were unsure, and 5.5% reported that they would not get the vaccine. Factors associated with intention or vaccination were exposure to clinical procedures (vs no procedures: adjusted odds ratio [AOR], 1.39; 95% confidence interval [CI], 1.16–1.65) and a negative serology test for COVID-19 (vs no test: AOR, 1.46; 95% CI, 1.24–1.73). Nurses (vs physicians: AOR, 0.24; 95% CI, 0.17–0.33), non-Hispanic Black respondents (vs Asian respondents: AOR, 0.35; 95% CI, 0.21–0.59), and women (vs men: AOR, 0.38; 95% CI, 0.30–0.50) had lower odds of intending to get vaccinated. By 6-month follow-up, >90% of those who had previously been unsure were vaccinated, whereas 59.7% of those who had previously reported no intention of getting vaccinated were vaccinated.
Conclusions:
COVID-19 vaccination in HCWs was high, but variability in vaccination intention exists. Targeted messaging coupled with vaccine mandates can support uptake.
Cardiac intensivists frequently assess patient readiness to wean off mechanical ventilation with an extubation readiness trial despite it being no more effective than clinician judgement alone. We evaluated the utility of high-frequency physiologic data and machine learning for improving the prediction of extubation failure in children with cardiovascular disease.
Methods:
This was a retrospective analysis of clinical registry data and streamed physiologic extubation readiness trial data from one paediatric cardiac ICU (12/2016-3/2018). We analysed patients’ final extubation readiness trial. Machine learning methods (classification and regression tree, Boosting, Random Forest) were performed using clinical/demographic data, physiologic data, and both datasets. Extubation failure was defined as reintubation within 48 hrs. Classifier performance was assessed on prediction accuracy and area under the receiver operating characteristic curve.
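The abstract names the learners but not the implementation; a minimal scikit-learn sketch of the comparison follows, with a simulated stand-in for the clinical and physiologic feature matrix (class balance set to roughly the 11% failure rate reported below):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Stand-in data: real features would be clinical/demographic variables and
# high-frequency physiologic descriptors from the readiness trial.
X, y = make_classification(n_samples=178, n_features=20,
                           weights=[0.888], random_state=0)  # ~11% failures

for name, clf in [("CART", DecisionTreeClassifier(random_state=0)),
                  ("Boosting", GradientBoostingClassifier(random_state=0)),
                  ("Random Forest", RandomForestClassifier(random_state=0))]:
    acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: accuracy={acc:.2f}, AUC={auc:.2f}")
```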
Results:
Of 178 episodes, 11.2% (N = 20) failed extubation. Using clinical/demographic data, our machine learning methods identified variables such as age, weight, height, and ventilation duration as being important in predicting extubation failure. Best classifier performance with this data was Boosting (prediction accuracy: 0.88; area under the receiver operating characteristic curve: 0.74). Using physiologic data, our machine learning methods found oxygen saturation extremes and descriptors of dynamic compliance, central venous pressure, and heart/respiratory rate to be of importance. The best classifier in this setting was Random Forest (prediction accuracy: 0.89; area under the receiver operating characteristic curve: 0.75). Combining both datasets produced classifiers highlighting the importance of physiologic variables in determining extubation failure, though predictive performance was not improved.
Conclusion:
Physiologic variables not routinely scrutinised during extubation readiness trials were identified as potential extubation failure predictors. Larger analyses are necessary to investigate whether these markers can improve clinical decision-making.
Seed retention, and ultimately seed shatter, are extremely important for the efficacy of harvest weed seed control (HWSC) and are likely influenced by various agroecological and environmental factors. Field studies investigated seed-shattering phenology of 22 weed species across three soybean [Glycine max (L.) Merr.]-producing regions in the United States. We further evaluated the potential drivers of seed shatter in terms of weather conditions, growing degree days, and plant biomass. Based on the results, weather conditions had no consistent impact on weed seed shatter. However, individual weed plant biomass was positively correlated with delayed seed shatter (i.e., greater seed retention) at harvest. This work demonstrates that HWSC can potentially reduce weed seedbank inputs from plants that have escaped early-season management practices and retained seed through harvest. However, smaller individuals within the same population that shatter seed before harvest risk escaping both early-season management and HWSC.
We present the data and initial results from the first pilot survey of the Evolutionary Map of the Universe (EMU), observed at 944 MHz with the Australian Square Kilometre Array Pathfinder (ASKAP) telescope. The survey covers $270 \,\mathrm{deg}^2$ within the area covered by the Dark Energy Survey, reaching a depth of 25–30 $\mu\mathrm{Jy\ beam}^{-1}$ rms at a spatial resolution of $\sim$11–18 arcsec, resulting in a catalogue of $\sim$220 000 sources, of which $\sim$180 000 are single-component sources. Here we present the catalogue of single-component sources, together with (where available) optical and infrared cross-identifications, classifications, and redshifts. This survey explores a new region of parameter space compared to previous surveys. Specifically, the EMU Pilot Survey has a high density of sources and a high sensitivity to low-surface-brightness emission. These properties result in the detection of types of sources that were rarely seen in, or absent from, previous surveys. We present some of these new results here.
To determine the changes in severe acute respiratory coronavirus virus 2 (SARS-CoV-2) serologic status and SARS-CoV-2 infection rates in healthcare workers (HCWs) over 6-months of follow-up.
Design:
Prospective cohort study.
Setting and participants:
HCWs in the Chicago area.
Methods:
Cohort participants were recruited in May and June 2020 for baseline serology testing (Abbott anti-nucleocapsid IgG) and were then invited for follow-up serology testing 6 months later. Participants completed monthly online surveys that assessed demographics, medical history, coronavirus disease 2019 (COVID-19), and exposures to SARS-CoV-2. The electronic medical record was used to identify SARS-CoV-2 polymerase chain reaction (PCR) positivity during follow-up. Serologic conversion and SARS-CoV-2 infection or possible reinfection rates (cases per 10,000 person days) by antibody status at baseline and follow-up were assessed.
Results:
In total, 6,510 HCWs were followed for a total of 1,285,395 person days (median follow-up, 216 days). For participants who had baseline and follow-up serology checked, 285 (6.1%) of the 4,681 seronegative participants at baseline seroconverted to positive at follow-up; 138 (48%) of the 263 who were seropositive at baseline were seronegative at follow-up. When analyzed by baseline serostatus alone, 519 (8.4%) of 6,194 baseline seronegative participants had a positive PCR after baseline serology testing (4.25 per 10,000 person days). Of 316 participants who were seropositive at baseline, 8 (2.5%) met criteria for possible SARS-CoV-2 reinfection (ie, PCR positive >90 days after baseline serology) during follow-up, a rate of 1.27 per 10,000 days at risk. The adjusted rate ratio for possible reinfection in baseline seropositive compared to infection in baseline seronegative participants was 0.26 (95% confidence interval, 0.13–0.53).
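As a worked check of the incidence rates reported above (the person-day denominators are implied by the counts and rates given; the 0.26 figure is covariate-adjusted, so the crude ratio computed here differs slightly):

```python
# Rates are cases per 10,000 person-days, as reported in the abstract.
seroneg_cases, seroneg_rate = 519, 4.25
seropos_cases, seropos_rate = 8, 1.27

seroneg_days = seroneg_cases / seroneg_rate * 10_000  # ~1.22 million days
seropos_days = seropos_cases / seropos_rate * 10_000  # ~63,000 days at risk

print(f"crude rate ratio = {seropos_rate / seroneg_rate:.2f}")  # ~0.30
```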
Conclusions:
Seropositivity in HCWs is associated with moderate protection from future SARS-CoV-2 infection.
Understanding risk factors for death from Covid-19 is key to providing good quality clinical care. We assessed the presenting characteristics of the ‘first wave’ of patients with Covid-19 at Royal Oldham Hospital, UK and undertook logistic regression modelling to investigate factors associated with death. Of 470 patients admitted, 169 (36%) died. The median age was 71 years (interquartile range 57–82), and 255 (54.3%) were men. The most common comorbidities were hypertension (n = 218, 46.4%), diabetes (n = 143, 30.4%) and chronic neurological disease (n = 123, 26.1%). The most frequent complications were acute kidney injury (AKI) (n = 157, 33.4%) and myocardial injury (n = 21, 4.5%). Forty-three (9.1%) patients required intubation and ventilation, and 39 (8.3%) received non-invasive ventilation. Independent risk factors for death were increasing age (odds ratio (OR) per 10-year increase above 40 years 1.87, 95% confidence interval (CI) 1.57–2.27), hypertension (OR 1.72, 95% CI 1.10–2.70), cancer (OR 2.20, 95% CI 1.27–3.81), platelets <150 × 10³/μl (OR 1.93, 95% CI 1.13–3.30), C-reactive protein ≥100 μg/ml (OR 1.68, 95% CI 1.05–2.68), >50% chest radiograph infiltrates (OR 2.09, 95% CI 1.16–3.77) and AKI (OR 2.60, 95% CI 1.64–4.13). There was no independent association between death and gender, ethnicity, deprivation level, fever, SpO2/FiO2, lymphopoenia or other comorbidities. These findings will inform clinical and shared decision making, including use of respiratory support and therapeutic agents.
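The modelling step described is standard multivariable logistic regression; here is a minimal statsmodels sketch with simulated stand-in data (the variable names are illustrative and the fitted coefficients carry no clinical meaning):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 470  # same cohort size as the study; the data are otherwise simulated
df = pd.DataFrame({
    "died": rng.integers(0, 2, n),
    "age_decades_over_40": rng.uniform(0, 4, n),
    "hypertension": rng.integers(0, 2, n),
    "cancer": rng.integers(0, 2, n),
    "aki": rng.integers(0, 2, n),
})

fit = smf.logit("died ~ age_decades_over_40 + hypertension + cancer + aki",
                data=df).fit(disp=False)
odds_ratios = np.exp(fit.params)   # adjusted ORs, as reported in such studies
conf_int = np.exp(fit.conf_int())  # 95% CIs on the OR scale
print(pd.concat([odds_ratios, conf_int], axis=1))
```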
Gravitational waves from coalescing neutron stars encode information about nuclear matter at extreme densities that is inaccessible to laboratory experiments. The late inspiral is influenced by the presence of tides, which depend on the neutron star equation of state. Neutron star mergers are expected to often produce rapidly rotating remnant neutron stars that emit gravitational waves. These will provide clues to the extremely hot post-merger environment. This signature of nuclear matter in gravitational waves contains most information in the 2–4 kHz frequency band, which lies outside the most sensitive band of current detectors. We present the design concept and science case for a Neutron Star Extreme Matter Observatory (NEMO): a gravitational-wave interferometer optimised to study nuclear physics with merging neutron stars. The concept uses high circulating laser power, quantum squeezing, and a detector topology specifically designed to achieve the high-frequency sensitivity necessary to probe nuclear matter using gravitational waves. Above 1 kHz, the proposed strain sensitivity is comparable to that of full third-generation detectors at a fraction of the cost. Such sensitivity changes the expected detection rate for post-merger remnants from approximately one per few decades with two A+ detectors to a few per year, and could potentially allow the first gravitational-wave observations of supernovae, isolated neutron stars, and other exotica.
Potential effectiveness of harvest weed seed control (HWSC) systems depends upon seed shatter of the target weed species at crop maturity, enabling its collection and processing at crop harvest. However, seed retention likely is influenced by agroecological and environmental factors. In 2016 and 2017, we assessed seed-shatter phenology in 13 economically important broadleaf weed species in soybean [Glycine max (L.) Merr.] from crop physiological maturity to 4 wk after physiological maturity at multiple sites spread across 14 states in the southern, northern, and mid-Atlantic United States. Greater proportions of seeds were retained by weeds in southern latitudes and shatter rate increased at northern latitudes. Amaranthus spp. seed shatter was low (0% to 2%), whereas shatter varied widely in common ragweed (Ambrosia artemisiifolia L.) (2% to 90%) over the weeks following soybean physiological maturity. Overall, the broadleaf species studied shattered less than 10% of their seeds by soybean harvest. Our results suggest that some of the broadleaf species with greater seed retention rates in the weeks following soybean physiological maturity may be good candidates for HWSC.
Seed shatter is an important weediness trait on which the efficacy of harvest weed seed control (HWSC) depends. The level of seed shatter in a species is likely influenced by agroecological and environmental factors. In 2016 and 2017, we assessed seed shatter of eight economically important grass weed species in soybean [Glycine max (L.) Merr.] from crop physiological maturity to 4 wk after maturity at multiple sites spread across 11 states in the southern, northern, and mid-Atlantic United States. From soybean maturity to 4 wk after maturity, cumulative percent seed shatter was lowest in the southern U.S. regions and increased moving north through the states. At soybean maturity, the percent of seed shatter ranged from 1% to 70%. That range had shifted to 5% to 100% (mean: 42%) by 25 d after soybean maturity. There were considerable differences in seed-shatter onset and rate of progression between sites and years in some species that could impact their susceptibility to HWSC. Our results suggest that many summer annual grass species are likely not ideal candidates for HWSC, although HWSC could substantially reduce their seed output during certain years.
The unprecedented growth, availability, and accessibility of sophisticated image analysis algorithms and powerful computational resources have led to the idea of developing web-based computational infrastructures that can meet users’ new requirements. At the same time, the gap between the pace of data generation and the capability to extract clinically or scientifically relevant information is rapidly widening.
Integrating sophisticated mathematical models, efficient computational algorithms, and advanced hardware infrastructure provides the sensitivity necessary to detect, extract, and analyze the subtle, dynamic, and distributed patterns that distinguish one brain from another, and a diseased brain from a healthy one.
neuGRID is the leading e-Infrastructure where neuroscientists can find core services and resources for brain image analysis. The neuGRID platform makes use of grid services and computing and was developed with the aim of overcoming the hurdles the average scientist meets when trying to set up advanced experiments in computational neuroimaging, thereby empowering a larger base of scientists. Although originally built for neuroscientists working in the field of Alzheimer’s disease (AD), the infrastructure is designed to be expandable to services from other medical fields (e.g., multiple sclerosis, psychiatric conditions).
“neuGRID for Users” will provide an e-Science environment by further developing and deploying the neuGRID infrastructure to deliver a Virtual Laboratory offering neuroscientists access to a wide range of datasets, algorithm pipelines, computational resources, services, and support. This abstract is intended to make researchers working with neuroimaging aware of the full range of resources available to them.
Small mountain glaciers are an important part of the cryosphere and tend to respond rapidly to climate warming. Historically, mapping very small glaciers (generally considered to be <0.5 km2) using satellite imagery has often been subjective due to the difficulty in differentiating them from perennial snowpatches. For this reason, most scientists implement minimum size-thresholds (typically 0.01–0.05 km2). Here, we compare the ability of different remote-sensing approaches to identify and map very small glaciers on imagery of varying spatial resolutions (30–0.25 m) and investigate how operator subjectivity influences the results. Based on this analysis, we support the use of a minimum size-threshold of 0.01 km2 for imagery with coarse to medium spatial resolution (30–10 m). However, when mapping on high-resolution imagery (<1 m) with minimal seasonal snow cover, glaciers <0.05 km2 and even <0.01 km2 are readily identifiable and using a minimum threshold may be inappropriate. For these cases, we develop a set of criteria to enable the identification of very small glaciers and classify them as certain, probable or possible. This should facilitate a more consistent approach to identifying and mapping very small glaciers on high-resolution imagery, helping to produce more comprehensive and accurate glacier inventories.
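A minimal sketch of the resolution-dependent thresholding logic described above; the function and labels are hypothetical, and the certain/probable/possible criteria are morphological judgements that are only stubbed here:

```python
def glacier_mapping_guidance(area_km2: float, pixel_size_m: float) -> str:
    """Suggest how to treat a candidate ice body, per the thresholds above."""
    if pixel_size_m >= 10:  # coarse to medium spatial resolution (30-10 m)
        return "map" if area_km2 >= 0.01 else "exclude (below 0.01 km2)"
    # High-resolution imagery (<1 m): glaciers <0.05 and even <0.01 km2 are
    # identifiable, so classify via the certain/probable/possible criteria.
    return "apply certain/probable/possible criteria"

print(glacier_mapping_guidance(0.008, 30))    # exclude (below 0.01 km2)
print(glacier_mapping_guidance(0.008, 0.25))  # apply criteria
```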
OBJECTIVES/SPECIFIC AIMS: The purpose of the present secondary data analysis was to examine the effect of moderate-severe disturbed sleep before the start of radiation therapy (RT) on subsequent RT-induced pain. METHODS/STUDY POPULATION: Analyses were performed on 676 RT-naïve breast cancer patients (mean age 58, 100% female) scheduled to receive RT from a previously completed nationwide, multicenter, phase II randomized controlled trial examining the efficacy of oral curcumin for radiation dermatitis severity. The trial was conducted at 21 community oncology practices throughout the US affiliated with the University of Rochester Cancer Center NCI Community Oncology Research Program (URCC NCORP) Research Base. Sleep disturbance was assessed using a single-item question from the modified MD Anderson Symptom Inventory (SI) on a 0–10 scale, with higher scores indicating greater sleep disturbance. Total subjective pain, as well as the subdomains of pain (sensory, affective, and perceived), was assessed by the short-form McGill Pain Questionnaire. Pain at the treatment site (pain-Tx) was also assessed using a single-item question from the SI. These assessments were completed pre-RT (baseline) and post-RT. For the present analyses, patients were dichotomized into 2 groups: those with moderate-severe disturbed sleep at baseline (score ≥ 4 on the SI; n=101) versus those with mild or no disturbed sleep (control group; score 0–3 on the SI; n=575). RESULTS/ANTICIPATED RESULTS: Prior to the start of RT, breast cancer patients with moderate-severe disturbed sleep at baseline were younger; less likely to have had lumpectomy or partial mastectomy and more likely to have had total mastectomy and chemotherapy; more likely to be on sleep, anti-anxiety/depression, and prescription pain medications; and more likely to suffer from depression or anxiety disorder than the control group (all p ≤ 0.02). Spearman rank correlations showed that changes in sleep disturbance from baseline to post-RT were significantly correlated with concurrent changes in total pain (r=0.38; p<0.001), sensory pain (r=0.35; p<0.001), affective pain (r=0.21; p<0.001), perceived pain intensity (r=0.37; p<0.001), and pain-Tx (r=0.35; p<0.001). In total, 92% of patients with moderate-severe disturbed sleep at baseline reported post-RT total pain, compared with 79% of patients in the control group (p=0.006). Generalized estimating equations, after controlling for baseline pain and other covariates (baseline fatigue and distress, age, sleep medications, anti-anxiety/depression medications, prescription pain medications, and depression or anxiety disorder), showed that patients with moderate-severe disturbed sleep at baseline had significantly higher mean values of post-RT total pain (by 39%; p=0.033), post-RT sensory pain (by 41%; p=0.046), and post-RT affective pain (by 55%; p=0.035) than the control group. Perceived pain intensity (p=0.066) and pain-Tx (p=0.086) at post-RT did not differ significantly between the 2 groups. DISCUSSION/SIGNIFICANCE OF IMPACT: These findings suggest that moderate-severe disturbed sleep prior to RT is an important predictor of worsening pain post-RT in breast cancer patients. There are several plausible explanations. Sleep disturbance, such as sleep loss and sleep continuity disturbance, could impair sleep-related recovery and repair of tissue damage associated with cancer and its treatment, thereby amplifying pain.
Sleep disturbance may also reduce the pain tolerance threshold through increased sensitization of the central nervous system. In addition, pain and sleep disturbance may share common neuroimmunological pathways: sleep disturbance may modulate inflammation, which in turn may contribute to increased pain. Further research is needed to confirm these findings and to determine whether interventions targeting sleep disturbance at an early phase could be a viable approach to reducing pain after RT.
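The correlation analysis reported above is a standard Spearman rank correlation on baseline-to-post-RT change scores; a minimal SciPy sketch with simulated stand-in data:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
delta_sleep = rng.normal(size=676)                     # change in SI sleep item
delta_pain = 0.4 * delta_sleep + rng.normal(size=676)  # correlated stand-in

rho, p = spearmanr(delta_sleep, delta_pain)
print(f"rho = {rho:.2f}, p = {p:.3g}")
```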