The Edmonton-based mobile stroke unit (MSU), which transports patients to the University of Alberta Hospital (UAH), enrolled patients in the Intravenous Tenecteplase Compared with Alteplase for Acute Ischemic Stroke (AcT) trial. We examined the feasibility of trial enrollment in the MSU and its impact on acute stroke workflow metrics and functional outcomes at 90–120 days.
Methods:
In this post hoc analysis, patients were divided into three groups based on enrollment site: MSU (n = 43), UAH (n = 273) and non-UAH (n = 1261). All patients were enrolled with a deferred consent process. The primary outcome for this analysis was the feasibility of enrollment, defined as the proportion of patients receiving intravenous thrombolysis (IVT) during the study period who were enrolled in the trial. Multiple linear and binary logistic regression models were used to evaluate the adjusted effect of the study groups on acute stroke workflow metrics and functional outcomes at 90–120 days.
Results:
All eligible IVT-treated patients in the MSU during the study period (100%) were enrolled in the AcT trial. Covariate-adjusted linear regression showed shorter door-to-needle (17.2 [9.7–24.6] min) and CT-to-needle (10.7 [4.2–17.1] min) times in the MSU compared to the UAH and non-UAH sites. There was no difference between groups in the proportion of patients with an excellent functional outcome (mRS 0–1) at 90–120 days or symptomatic intracerebral hemorrhage (ICH) at 24 hours.
Conclusions:
Enrollment in the AcT trial from the MSU was feasible. MSU-enrolled patients demonstrated faster door-to-needle and CT-to-needle times, resulting in earlier IVT administration and similar rates of symptomatic ICH.
Clinical trials often struggle to recruit enough participants, with only 10% of eligible patients enrolling. This is concerning for conditions like stroke, where timely decision-making is crucial. Frontline clinicians typically screen patients manually, but this approach can be overwhelming and lead to many eligible patients being overlooked.
Methods:
To address the problem of efficient and inclusive screening for trials, we developed a matching algorithm using imaging and clinical variables gathered as part of the AcT trial (NCT03889249) to automatically screen patients by matching these variables against each trial’s inclusion and exclusion criteria using rule-based logic. We then used the algorithm to identify patients who could have been enrolled in six trials: EASI-TOC (NCT04261478), CATIS-ICAD (NCT04142125), CONVINCE (NCT02898610), TEMPO-2 (NCT02398656), ESCAPE-MEVO (NCT05151172), and ENDOLOW (NCT04167527). To evaluate our algorithm, we compared our findings to the number of enrollments achieved without a matching algorithm. The algorithm’s performance was validated by comparing results with ground truth from a manual review by two clinicians. The algorithm’s ability to reduce screening time was assessed by comparing it with the average time taken by study clinicians.
Results:
The algorithm identified more potentially eligible study candidates than the number of participants actually enrolled. It also showed over 90% sensitivity and specificity for all trials and reduced screening time by over 100-fold.
Conclusions:
Automated matching algorithms can help clinicians quickly identify eligible patients and reduce resources needed for enrolment. Additionally, the algorithm can be modified for use in other trials and diseases.
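The rule-based matching described above can be sketched in a few lines. This is a minimal illustration, not the study's implementation: the variable names, thresholds, and criteria below are hypothetical stand-ins for the AcT-derived variables and the six trials' actual eligibility rules.

```python
# Sketch of rule-based trial-eligibility matching. All variable names and
# thresholds here are hypothetical, for illustration only.

def matches_trial(patient, criteria):
    """True if every inclusion rule passes and no exclusion rule fires."""
    inclusion_ok = all(rule(patient) for rule in criteria["inclusion"])
    exclusion_hit = any(rule(patient) for rule in criteria["exclusion"])
    return inclusion_ok and not exclusion_hit

# Hypothetical criteria for one trial (illustrative thresholds only)
example_criteria = {
    "inclusion": [
        lambda p: p["age"] >= 18,
        lambda p: p["nihss"] >= 6,            # stroke severity score
        lambda p: p["onset_to_ct_min"] <= 240,
    ],
    "exclusion": [
        lambda p: p["on_anticoagulation"],
    ],
}

patients = [
    {"age": 72, "nihss": 14, "onset_to_ct_min": 90, "on_anticoagulation": False},
    {"age": 45, "nihss": 3, "onset_to_ct_min": 60, "on_anticoagulation": False},
]

eligible = [p for p in patients if matches_trial(p, example_criteria)]
print(len(eligible))  # prints 1: the second patient fails the NIHSS rule
```

Encoding each criterion as an independent predicate keeps the rule set auditable and easy to adapt to other trials, which is what makes this approach portable to other diseases.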
The presence of an intraluminal thrombus in acutely symptomatic carotid stenosis is thought to represent a high-risk lesion for short-term stroke recurrence, though evidence on natural history and treatment is lacking, leading to equipoise and much variation in practice. The objective of this study was to map these variations in practice (medical management and timing of revascularization), determine the considerations that influence clinician decision-making in this condition and gather opinions to inform the development and design of future trials in the area.
Methods:
This was a mixed-methods study using both quantitative survey methods and qualitative interview-based methods. International perspectives were gathered by distributing a case-based survey via the “Practice Current” section of Neurology: Clinical Practice and interviewing international experts using established qualitative research methods.
Results:
The presence of an intraluminal thrombus significantly increased the likelihood of using a regimen containing anticoagulation agents (p < 0.001) in acutely symptomatic carotid stenosis in the case-based survey. Themes that emerged from qualitative interview analysis were therapeutic uncertainty regarding anticoagulation, decision to reimage, revascularization choices and future trial design and anticipated challenges.
Conclusion:
Results of this study demonstrate a preference for anticoagulation and delayed revascularization after reimaging to examine for clot resolution, though much equipoise remains. While there is interest from international experts in future trials, further study is needed to understand the natural history of this condition in order to inform trial design.
Herbicides have been placed in global Herbicide Resistance Action Committee (HRAC) herbicide groups based on their sites of action (e.g., acetolactate synthase–inhibiting herbicides are grouped in HRAC Group 2). A major driving force for this classification system is that growers have been encouraged to rotate or mix herbicides from different HRAC groups to delay the evolution of herbicide-resistant weeds, because in theory, all active ingredients within a herbicide group physiologically affect weeds similarly. Although herbicide resistance in weeds has been studied for decades, recent research on the biochemical and molecular basis for resistance has demonstrated that patterns of cross-resistance are usually quite complicated and much more complex than merely stating, for example, a certain weed population is Group 2-resistant. The objective of this review article is to highlight and describe the intricacies associated with the magnitude of herbicide resistance and cross-resistance patterns that have resulted from myriad target-site and non–target site resistance mechanisms in weeds, as well as environmental and application timing influences. Our hope is this review will provide opportunities for students, growers, agronomists, ag retailers, regulatory personnel, and research scientists to better understand and realize that herbicide resistance in weeds is far more complicated than previously considered when based solely on HRAC groups. Furthermore, a comprehensive understanding of cross-resistance patterns among weed species and populations may assist in managing herbicide-resistant biotypes in the short term by providing growers with previously unconsidered effective control options. This knowledge may also inform agrochemical company efforts aimed at developing new resistance-breaking chemistries and herbicide mixtures. 
However, in the long term, nonchemical management strategies, including cultural, mechanical, and biological weed management tactics, must also be implemented to prevent or delay increasingly problematic issues with weed resistance to current and future herbicides.
This manuscript addresses a critical topic: navigating the complexities of conducting clinical trials during a pandemic. Central to this discussion is engaging communities to ensure diverse participation. The manuscript elucidates deliberate strategies employed to recruit participants from minority communities adversely affected by social drivers of health for participation in COVID-19 trials. The paper adopts a descriptive approach, eschewing analysis of the data-driven efficacy of these efforts, and instead provides a comprehensive account of the strategies utilized. The Accelerate COVID-19 Treatment Interventions and Vaccines (ACTIV) public–private partnership launched early in the COVID-19 pandemic to develop clinical trials to advance SARS-CoV-2 treatments. In this paper, ACTIV investigators share challenges in conducting research during an evolving pandemic and the approaches selected to engage communities when traditional strategies were infeasible. Lessons from this experience include the importance of involving community representatives early in study design and implementation, and of integrating well-developed public outreach and communication strategies with trial launch. Centralization and coordination of outreach will allow for efficient use of resources and the sharing of best practices. Insights gleaned from the ACTIV program, as outlined in this paper, shed light on effective strategies for involving communities in treatment trials amidst rapidly evolving public health emergencies. This underscores the critical importance of establishing community engagement initiatives well in advance of a pandemic.
In recent times, health professionals (HPs) may feel a sense of discomfort and nervousness when disconnected from their smartphones, giving rise to the new phenomenon of “No Mobile Phone Phobia,” or nomophobia.
Objectives
We aim to study lifestyle-related factors that influence HPs’ Nomophobia.
Methods
From April to June 2023, a global cross-sectional study was conducted using a modified Nomophobia Questionnaire (NMP-Q). The original 20 NMP-Q questions (Qs) were reduced to 14 to avoid repetitive Qs with similar meanings. The Qs were categorized into four sections: A- Not Being Able to Access Information; B- Losing Connectedness; C- Not Being Able to Communicate; and D- Giving Up Convenience. New sections, “E- Daily Habits” and “F- Smartphone Type,” along with a question on “Hours Spent Daily,” were added. Before launch, the survey was internally and externally validated by trained psychiatrists as well as experienced researchers. We used social media, WhatsApp, text messages and emails to share it with HPs of different specialties worldwide. The survey was anonymous and IRB-exempt.
Results
HPs from 105 countries contributed 12,253 responses. Overall, 47.3% of HPs agreed/strongly agreed (A/SA) that they prefer to use their smartphone before bedtime. Over half (57.8%) of HPs A/SA that they checked their notifications immediately after waking up in the morning. Only 19.4% of HPs A/SA that they woke up in the middle of the night to check notifications. In total, 40.5% of HPs A/SA, 22% were neutral, and 37.3% disagreed/strongly disagreed (D/SD) with using smartphones while eating their meals. A total of 52.7% of HPs preferred smartphone usage over exercising as a break, while 54.9% of HPs A/SA that they chose smartphones over exploring other hobbies for relaxation. A total of 44.2% of respondents A/SA with smartphone usage in the restroom, while 39.8% D/SD. Further, 37.4% of participants D/SD that they got distracted by notifications, resisting the urge to answer any calls or texts while performing a focused task, whereas 39.6% A/SA and 23% were neutral. A total of 80% of respondents met the modified criteria for moderate-severe nomophobia.
Conclusions
In this large-scale survey-based study on nomophobia, the additional Qs in the NMP-Q may help establish that nomophobia can result from daily lifestyle decisions rather than being an isolated issue.
Background: Duchenne muscular dystrophy (DMD) is caused by DMD gene mutations. Delandistrogene moxeparvovec is an investigational gene transfer therapy, developed to address the underlying cause of DMD. We report findings from Part 1 (52 weeks) of the two-part EMBARK trial (NCT05096221). Methods: Key inclusion criteria: Ambulatory patients aged ≥4-<8 years with a confirmed DMD mutation within exons 18–79 (inclusive); North Star Ambulatory Assessment (NSAA) score >16 and <29 at screening. Eligible patients were randomized 1:1 to intravenous delandistrogene moxeparvovec (1.33×10¹⁴ vg/kg) or placebo. The primary endpoint was change from baseline in NSAA total score to Week 52. Results: At Week 52 (n=125), the primary endpoint did not reach statistical significance, although there was a nominal difference in change from baseline in NSAA total score in the delandistrogene moxeparvovec (2.6, n=63) versus placebo groups (1.9, n=61). Key secondary endpoints (time to rise, micro-dystrophin expression, 10-meter walk/run) demonstrated treatment benefit in both age groups (4-5 and 6-7 years; p<0.05). There were no new safety signals, reinforcing the favorable and manageable safety profile observed to date. Conclusions: Based on the totality of functional assessments including the timed function tests, treatment with delandistrogene moxeparvovec indicates beneficial modification of disease trajectory.
Polynuclear hydroxy-Al cations were prepared by partially neutralizing dilute solutions of aluminum chloride. These cations were introduced in the interlayer space of montmorillonite by cation exchange, which formed heat-stable pillars between the silicate layers. Polynuclear hydroxy-Al was preferentially adsorbed on montmorillonite compared with monomer-Al; the maximum amount adsorbed was ∼400 meq/100 g of montmorillonite. Of this amount 320 meq was non-exchangeable. The 001 X-ray powder diffraction reflection of the polynuclear hydroxy-Al-montmorillonite complex was at 27 Å, with four additional higher-order basal reflections, giving an average d(001) value of 28.4 Å. This complex was thermally stable to 700°C. An analysis of the basal reflections by the Fourier transform method indicated that the 28-Å complex had a relatively regular interstratified structure of 9.6- and 18.9-Å component layers with a mixing ratio of 0.46:0.54. This ratio implies that the hydroxy-Al pillars occupied every second layer. Considering the relatively small amount of Al adsorbed and the thermally stable nature of the structure, the hydroxy-Al pillars must have been sparsely but homogeneously distributed in the interlayer space.
A total of 108 diverse sorghum (Sorghum bicolor) accessions were characterized for quantitative and qualitative fodder-related traits and zonate leaf spot (ZLS) (Gloeocercospora sorghi) disease during two successive wet seasons of 2019 and 2020 in an augmented randomized block design. Shannon's diversity index and analysis of variance showed the existence of significant variability among qualitative and quantitative traits. K-means clustering showed a strong relationship between green fodder yield (GFY) and other yield-contributing traits. The dendrogram constructed from morphological traits classified accessions into four diverse groups, with most genotypes falling under cluster II. Principal component analysis bi-plot analysis explained a total variation of 68.96%, to which GFY, stem weight per plant, panicle length and dry matter yield (DMY) contributed significantly. From the experimental results, three sorghum genotypes, viz., IG-03-424, IG-01-436 and IG-03-438, were identified as promising for higher GFY (808.66 g/plant) and DMY (238.0 g/plant). Further, based on disease reactions under natural conditions, five genotypes, viz., EC-512397, EC512393, EC512394, EC512399 and IG-02-437, were identified as potential donors for resistance to ZLS disease. These selected lines could be used as promising sources for high biomass and disease resistance in forage sorghum breeding programmes.
Tight focusing with very small f-numbers is necessary to achieve the highest at-focus irradiances. However, tight focusing imposes strong demands on precise target positioning in-focus to achieve the highest on-target irradiance. We describe several near-infrared, visible, ultraviolet and soft and hard X-ray diagnostics employed in a ∼1022 W/cm2 laser–plasma experiment. We used nearly 10 J total energy femtosecond laser pulses focused into an approximately 1.3-μm focal spot on 5–20 μm thick stainless-steel targets. We discuss the applicability of these diagnostics to determine the best in-focus target position with approximately 5 μm accuracy (i.e., around half of the short Rayleigh length) and show that several diagnostics (in particular, 3$\omega$ reflection and on-axis hard X-rays) can ensure this accuracy. We demonstrated target positioning within several micrometers from the focus, ensuring over 80% of the ideal peak laser intensity on-target. Our approach is relatively fast (it requires 10–20 laser shots) and does not rely on the coincidence of low-power and high-power focal planes.
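The stated parameters are internally consistent, which a quick order-of-magnitude check confirms. The pulse duration is not given in the abstract, so the ~30 fs value below is an assumption typical of such femtosecond systems; the calculation also ignores Gaussian shape factors and encircled-energy fractions.

```python
import math

# Order-of-magnitude check of the quoted ~1e22 W/cm^2 intensity.
# Assumed: ~30 fs pulse duration (not stated in the text); Gaussian shape
# factors and encircled-energy fractions are ignored.
energy_j = 10.0            # total pulse energy (from the text)
duration_s = 30e-15        # assumed pulse duration
spot_diameter_cm = 1.3e-4  # 1.3 um focal spot (from the text)

peak_power_w = energy_j / duration_s
spot_area_cm2 = math.pi * (spot_diameter_cm / 2) ** 2
intensity = peak_power_w / spot_area_cm2  # W/cm^2

print(f"{intensity:.1e}")  # ~2.5e+22 W/cm^2, consistent with the ~1e22 regime
```

The same arithmetic shows why ~5 μm positioning accuracy matters: with a Rayleigh length of order 10 μm, a displacement of half a Rayleigh length already reduces on-target intensity appreciably.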
This paper proposes a lightweight, frequency-selective-surface, polarization-insensitive wideband metamaterial absorber for the C and X bands that employs only a few resistive elements. The proposed absorber comprises a slotted inner circular patch, bisected horizontally and vertically into four quadrants, and an outer concentric copper ring of 0.035 mm thickness connected to four lumped resistors placed 90° apart. The slotted inner circular patch provides significant inductive and capacitive loading. An absorption bandwidth of 8.02 GHz with more than 90% absorption is observed from 5.69 to 13.71 GHz under normal incidence, and almost the same absorptivity is maintained under oblique incidence up to 45° in both transverse electric and transverse magnetic modes. The designed metamaterial absorber was fabricated and measured using the free-space measurement technique. Measured and simulated results are in good agreement.
Despite associations between hypoglycemia and cognitive performance in cross-sectional and experimental studies (e.g., insulin clamp studies), few studies have evaluated this relationship in a naturalistic setting. This pilot study uses an ecological momentary assessment (EMA) design in adults with type 1 diabetes (T1D) to examine the impact of hypoglycemia and hyperglycemia, measured using continuous glucose monitoring (CGM), on cognitive performance, measured via ambulatory assessment.
Participants and Methods:
Twenty adults with T1D (mean age 38.9 years, range 26–67; 55% female; 55% bachelor’s degree or higher; mean HbA1c = 8.3%, range 5.4%–12.5%) were recruited from the Joslin Diabetes Center at SUNY Upstate Medical University. A blinded Dexcom G6 CGM was worn during everyday activities while participants completed 3–6 daily EMAs using personal smartphones. EMAs were delivered between 9 am and 9 pm for 15 days. EMAs included 3 brief cognitive tests developed by testmybrain.org and validated for brief mobile administration (Gradual Onset CPT d-prime, Digit Symbol Matching median reaction time, Multiple Object Tracking percent accuracy) and self-reported momentary negative affect. Day-level average scores were calculated for the cognitive and negative affect measures. Hypoglycemia and hyperglycemia were defined as the percentage of time spent with a sensor glucose value <70 mg/dL or >180 mg/dL, respectively. Daytime (8 am to 9 pm) and nighttime (9 pm to 8 am) glycemic excursions were calculated separately. Multilevel models estimated the between- and within-person associations between time spent in hypoglycemia or hyperglycemia (the night prior to, or the same day as, assessment) and cognitive performance (each cognitive test was modeled separately). To evaluate the effect of between-person differences, person-level variables were calculated as the mean across the study and grand-mean centered. To evaluate the effect of within-person fluctuations, day-level variables were calculated as deviations from these person-level means.
Results:
Within-person fluctuations in nighttime hypoglycemia were associated with daytime processing speed. Specifically, participants who spent a higher percentage of time in hypoglycemia than their average percentage the night prior to assessment performed slower than their average performance on the processing speed test (Digit Symbol Matching median reaction time, b = 94.16, p = 0.042), while same day variation in hypoglycemia was not associated with variation in Digit Symbol Matching performance. This association remained significant (b = 97.46, p = 0.037) after controlling for within-person and between-person effects of negative affect. There were no significant within-person associations between time spent in hyperglycemia and Digit Symbol Matching, nor day/night hypoglycemia or hyperglycemia and Gradual Onset CPT or Multiple Object Tracking.
Conclusions:
Our findings from this EMA study suggest that when individuals with T1D experience more time in hypoglycemia at night (compared to their average), they have slower processing speed the following day, while same day hypoglycemia and hyperglycemia does not similarly impact processing speed performance. These results showcase the power of intensive longitudinal designs using ambulatory cognitive assessment to uncover novel determinants of cognitive variation in real world settings that have direct clinical applications for optimizing cognitive performance. Future research with larger samples is needed to replicate these findings.
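The between-/within-person decomposition described in the methods (person means grand-mean centered for between-person effects, daily deviations from the person mean for within-person effects) can be sketched in a few lines. The column names and values below are hypothetical, not the study's dataset.

```python
# Sketch of the person-mean (between) and deviation (within) decomposition
# used as predictors in multilevel models. Column names and values are
# hypothetical, for illustration only.
import pandas as pd

df = pd.DataFrame({
    "participant": ["a", "a", "a", "b", "b", "b"],
    "pct_time_hypo": [5.0, 10.0, 15.0, 0.0, 2.0, 4.0],  # % night <70 mg/dL
})

# Between-person predictor: each participant's study-long mean,
# grand-mean centered across all observations
df["person_mean"] = df.groupby("participant")["pct_time_hypo"].transform("mean")
df["between"] = df["person_mean"] - df["person_mean"].mean()

# Within-person predictor: each day's deviation from that person's own mean
df["within"] = df["pct_time_hypo"] - df["person_mean"]

print(df[["participant", "between", "within"]])
```

Separating the two predictors this way lets the model distinguish "people who are hypoglycemic more often perform differently" from "a person performs differently after a night with more hypoglycemia than usual," which is the contrast the results above turn on.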
An 83-year-old patient with hypertension, diabetes, chronic kidney disease, and coronary artery disease requiring multiple stents presents to the emergency department with worsening dyspnea and syncope. Physical exam reveals a systolic murmur and bedside echocardiogram is consistent with severe aortic stenosis. He is referred to cardiology and cardiothoracic surgery for treatment. What aortic valve interventions are options? When would a transcatheter aortic valve replacement (TAVR) procedure be indicated? What are the anesthetic considerations?
An isolated population of 700 specimens, initially described as Corynosoma strumosum (Rudolphi, 1802) Lühe, 1904 and currently reassigned to Corynosoma neostrumosum n. sp., was collected from one young male Caspian seal, Pusa caspica (Gmelin), in the southern land-locked Caspian Sea in April 2009. Collected worms were morphologically unique compared with those reported by other observers in open waters, especially in the shape and distribution of proboscis hooks and trunk spines, dorso-ventral differences in proboscis hooks and their organization, the bald anterior proboscis, the consistently smaller size of the trunk and testes, larger eggs, the rough egg topography, epidermal micropores, and variations in the female gonopore. Molecular data from the internal transcribed spacer region of rDNA and the mitochondrial cox1 gene were also provided to supplement the morphological study of the new species.
This work describes the mineralogy of dolomite carbonatite occurring at the Newania carbonatite complex, Rajasthan, north-western India. The mineralogy records the textural and compositional features of magmatic and post-magmatic stages of carbonatite evolution. Ferroan dolomite is the principal constituent and displays variable degrees of deformation, ranging from brittle-to-ductile deformation regimes. Apatite exhibits textural and compositional evolutionary trends from early-to-late stages of carbonatite evolution. Two varieties of amphibole are reported for the first time from this complex, ferri-winchite and cummingtonite; the former is magmatic and the latter is metamorphic in origin. The columbite–tantalite-series minerals are columbite-(Fe), and their paragenesis evolves from composite grains with pyrochlore to individual crystals. Pyrochlore is magmatic with U–Ta–Ti-rich compositions and shows evolution from calciopyrochlore to kenopyrochlore, followed by alteration during late stages of carbonatite evolution. Monazite and baryte constitute the post-magmatic mineral assemblage; the former is hydrothermal and crystallised after precursor apatite, whereas the latter is associated exclusively with columbite–pyrochlore composites. On the basis of the mineralogy of the carbonatite, it is concluded that the parent magma was generated by low-degree partial melting of magnesite–phlogopite-bearing peridotite.
Background: Sex differences in treatment response to intravenous thrombolysis (IVT) are poorly characterized. We compared sex-disaggregated outcomes in patients receiving IVT for acute ischemic stroke in the Alteplase compared to Tenecteplase (AcT) trial, a Canadian multicentre, randomised trial. Methods: In this post-hoc analysis, the primary outcome was excellent functional outcome (modified Rankin Score [mRS] 0-1) at 90 days. Secondary and safety outcomes included return to baseline function, successful reperfusion (eTICI≥2b), death and symptomatic intracerebral hemorrhage. Results: Of 1577 patients, there were 755 women and 822 men (median age 77 [68-86] and 70 [59-79] years, respectively). There were no differences in rates of mRS 0-1 (aRR 0.95 [0.86-1.06]), return to baseline function (aRR 0.94 [0.84-1.06]), reperfusion (aRR 0.98 [0.80-1.19]) and death (aRR 0.91 [0.79-1.18]). There was no effect modification by treatment type on the association between sex and outcomes. The probability of excellent functional outcome decreased with increasing onset-to-needle time. This relation did not vary by sex (p for interaction = 0.42). Conclusions: The AcT trial demonstrated comparable functional, safety and angiographic outcomes by sex. This effect did not differ between alteplase and tenecteplase. The pragmatic enrolment and broad national participation in AcT provide reassurance that there do not appear to be sex differences in outcomes amongst Canadians receiving IVT.
Objective:
To quantify the frequency and drivers of unreported coronavirus disease 2019 (COVID-19) symptoms among nursing home (NH) staff.
Design:
Confidential telephone survey.
Setting:
The study was conducted in 70 NHs in Orange County, California, December 2020–February 2022.
Participants:
The study included 120 NH staff with COVID-19.
Methods:
We designed a 40-item telephone survey of NH staff to assess COVID-19 symptom reporting behavior and types of barriers [monetary, logistic, and emotional (fear or stigma)] and facilitators of symptom reporting using 5-point Likert scales. Summary statistics, reliability of survey constructs, and construct and discriminant validity were assessed.
Results:
Overall, 49% of surveys were completed during the 2020–2021 COVID-19 winter wave and 51% during the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) delta/omicron waves, with a relatively even distribution of certified nursing assistants, licensed vocational or registered nurses, and nonfrontline staff. Most COVID-19 cases (71%) were detected during mandated weekly NH surveillance testing, and most staff (67%) had ≥1 symptom prior to their test. Only 34% of those with symptoms disclosed their symptoms to a supervisor. Responses were consistent across 8 discrete survey constructs, with Cronbach α > 0.70. In the first wave of the pandemic, fear and lack of knowledge were drivers of symptom reporting. In later waves, adequate staffing and sick days were drivers of symptom reporting. COVID-19 help lines and encouragement from supervisors facilitated symptom reporting and testing.
Conclusions:
Mandatory COVID-19 testing for NH staff is key to identifying staff COVID-19 cases due to reluctance to speak up about existing symptoms. Active encouragement from supervisors to report symptoms and stay home when ill was a major driver of symptom reporting and resultant infection prevention and worker safety measures.