The stars of the Milky Way carry the chemical history of our Galaxy in their atmospheres as they journey through its vast expanse. Like barcodes, we can extract the chemical fingerprints of stars from high-resolution spectroscopy. The fourth data release (DR4) of the Galactic Archaeology with HERMES (GALAH) Survey, based on a decade of observations, provides the chemical abundances of up to 32 elements for 917 588 stars that also have exquisite astrometric data from the Gaia satellite. For the first time, these elements include life-essential nitrogen to complement carbon and oxygen, as well as more measurements of rare-earth elements critical to modern-life electronics, offering unparalleled insights into the chemical composition of the Milky Way. For this release, we use neural networks to simultaneously fit stellar parameters and abundances across the whole wavelength range, leveraging synthetic grids computed with Spectroscopy Made Easy. These grids account for atomic line formation in non-local thermodynamic equilibrium for 14 elements. In a two-iteration process, we first fit stellar labels to all 1 085 520 spectra, then co-add repeated observations and refine these labels using astrometric data from Gaia and 2MASS photometry, improving the accuracy and precision of stellar parameters and abundances. Our validation thoroughly assesses the reliability of spectroscopic measurements and highlights key caveats. GALAH DR4 represents yet another milestone in Galactic archaeology, combining detailed chemical compositions from multiple nucleosynthetic channels with kinematic information and age estimates. The resulting dataset, covering nearly a million stars, opens new avenues for understanding not only the chemical and dynamical history of the Milky Way but also the broader questions of the origin of elements and the evolution of planets, stars, and galaxies.
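To illustrate the emulator-style fitting described above in the most generic terms, the sketch below trains a small neural network to map stellar labels (Teff, log g, [Fe/H]) onto a toy synthetic spectrum and then recovers labels from a noisy "observation" by chi-square minimization. This is only a minimal sketch under simplified assumptions: the spectrum generator, grid ranges, and network architecture are invented for illustration and do not reproduce the GALAH DR4 pipeline or its Spectroscopy Made Easy grids.

```python
# Minimal, illustrative emulator-based spectral fitting (not the GALAH pipeline).
# A neural network learns labels -> synthetic spectrum, then labels are fitted
# to a noisy "observed" spectrum by chi-square minimization. Data are synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_pix = 200                                  # hypothetical number of spectral pixels

def toy_synth(labels):
    """Stand-in for a synthetic-grid spectrum generator (purely illustrative)."""
    teff, logg, feh = labels
    x = np.linspace(0.0, 1.0, n_pix)
    return (1.0
            - 0.3 * (feh + 2.0) * np.exp(-((x - 0.5) ** 2) / 0.01)
            - 0.1 * (teff / 6000.0) * np.sin(20 * x)
            + 0.02 * logg)

# 1) Build a small training grid of (labels -> spectrum) pairs.
labels_grid = np.column_stack([
    rng.uniform(4000, 7000, 500),            # Teff [K]
    rng.uniform(1.0, 5.0, 500),              # log g
    rng.uniform(-2.0, 0.5, 500),             # [Fe/H]
])
spectra_grid = np.array([toy_synth(l) for l in labels_grid])

# 2) Train the emulator (labels -> spectrum).
emulator = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0),
)
emulator.fit(labels_grid, spectra_grid)

# 3) Fit labels to a noisy "observed" spectrum by minimizing chi-square
#    (in practice the labels would also be scaled for the optimizer).
true_labels = np.array([5700.0, 4.4, -0.1])
obs = toy_synth(true_labels) + rng.normal(0, 0.005, n_pix)

def chi2(labels):
    model = emulator.predict(labels.reshape(1, -1))[0]
    return np.sum((obs - model) ** 2 / 0.005 ** 2)

result = minimize(chi2, x0=np.array([5500.0, 4.0, 0.0]), method="Nelder-Mead")
print("best-fit labels (Teff, log g, [Fe/H]):", result.x)
```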
Posttraumatic stress disorder (PTSD) has been associated with advanced epigenetic age cross-sectionally, but the association between these variables over time is unclear. This study conducted meta-analyses to test whether new-onset PTSD diagnosis and changes in PTSD symptom severity over time were associated with changes in two metrics of epigenetic aging over two time points.
Methods
We conducted meta-analyses of the association between change in PTSD diagnosis and symptom severity and change in epigenetic age acceleration/deceleration (age-adjusted DNA methylation age residuals as per the Horvath and GrimAge metrics) using data from 7 military and civilian cohorts participating in the Psychiatric Genomics Consortium PTSD Epigenetics Workgroup (total N = 1,367).
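For readers unfamiliar with the pooling step, the sketch below shows a generic inverse-variance-weighted (fixed-effect) meta-analysis of per-cohort interaction coefficients. The betas and standard errors are hypothetical placeholders, not values from the participating cohorts, and the cohort-level regression models that would produce them are not shown.

```python
# Inverse-variance-weighted (fixed-effect) meta-analysis of per-cohort
# interaction coefficients -- an illustrative sketch, not the Workgroup's code.
# The betas and standard errors below are hypothetical.
import numpy as np
from scipy import stats

betas = np.array([0.21, 0.10, 0.18, 0.05, 0.30, 0.12, 0.09])  # per-cohort estimates
ses   = np.array([0.10, 0.08, 0.12, 0.09, 0.15, 0.11, 0.07])  # standard errors

weights   = 1.0 / ses**2
meta_beta = np.sum(weights * betas) / np.sum(weights)
meta_se   = np.sqrt(1.0 / np.sum(weights))
z         = meta_beta / meta_se
meta_p    = 2 * stats.norm.sf(abs(z))        # two-sided p-value

print(f"meta beta = {meta_beta:.3f}, SE = {meta_se:.3f}, p = {meta_p:.3g}")
```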
Results
Meta-analysis revealed that the interaction between Time 1 (T1) Horvath age residuals and new-onset PTSD over time was significantly associated with Horvath age residuals at T2 (meta β = 0.16, meta p = 0.02, p-adj = 0.03). The interaction between T1 Horvath age residuals and changes in PTSD symptom severity over time was significantly related to Horvath age residuals at T2 (meta β = 0.24, meta p = 0.05). No associations were observed for GrimAge residuals.
Conclusions
Results indicated that individuals who developed new-onset PTSD or showed increased PTSD symptom severity over time evidenced greater epigenetic age acceleration at follow-up than would be expected based on baseline age acceleration. This suggests that PTSD may accelerate biological aging over time and highlights the need for intervention studies to determine if PTSD treatment has a beneficial effect on the aging methylome.
Objectives/Goals: We describe the prevalence of individuals with household exposure to SARS-CoV-2, who subsequently report symptoms consistent with COVID-19, while having PCR results persistently negative for SARS-CoV-2 (S[+]/P[-]). We assess whether paired serology can assist in identifying the true infection status of such individuals. Methods/Study Population: In a multicenter household transmission study, index patients with SARS-CoV-2 were identified and enrolled together with their household contacts within 1 week of the index patient's illness onset. For 10 consecutive days, enrolled individuals provided daily symptom diaries and nasal specimens for polymerase chain reaction (PCR). Contacts were categorized into 4 groups based on presence of symptoms (S[+/-]) and PCR positivity (P[+/-]). Acute and convalescent blood specimens from these individuals (30 days apart) were subjected to quantitative serologic analysis for SARS-CoV-2 anti-nucleocapsid, spike, and receptor-binding domain antibodies. The antibody change in S[+]/P[-] individuals was assessed by thresholds derived from receiver operating characteristic (ROC) analysis of S[+]/P[+] (infected) versus S[-]/P[-] (uninfected) individuals. Results/Anticipated Results: Among 1,433 contacts, 67% had ≥1 SARS-CoV-2 PCR[+] result, while 33% remained PCR[-]. Among the latter, 55% (n = 263) reported symptoms for at least 1 day, most commonly congestion (63%), fatigue (63%), headache (62%), cough (59%), and sore throat (50%). A history of both previous infection and vaccination was present in 37% of S[+]/P[-] individuals, 38% of S[-]/P[-], and 21% of S[+]/P[+] (P < 0.05). Vaccination alone was present in 37%, 41%, and 52%, respectively. ROC analyses of paired serologic testing of S[+]/P[+] (n = 354) vs. S[-]/P[-] (n = 103) individuals found that anti-nucleocapsid data had the highest area under the curve (0.87). Based on the 30-day antibody change, 6.9% of S[+]/P[-] individuals demonstrated an increased convalescent antibody signal, although a similar seroresponse was observed in 7.8% of the S[-]/P[-] group. Discussion/Significance of Impact: Reporting respiratory symptoms was common among household contacts with persistent PCR[-] results. Paired serology analyses found similar seroresponses between S[+]/P[-] and S[-]/P[-] individuals. The symptomatic-but-PCR-negative phenomenon, while frequent, is unlikely to be attributable to true SARS-CoV-2 infections missed by PCR.
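As a generic illustration of the threshold-derivation step, the sketch below runs an ROC analysis on simulated 30-day antibody changes for infected versus uninfected contacts, picks a cut-off, and applies it to the symptomatic-but-PCR-negative group. All values are simulated, and the cut-off rule (Youden's J) is an assumption; this is not the study's analysis code.

```python
# Sketch of deriving a seroresponse threshold by ROC analysis of infected
# (S[+]/P[+]) vs. uninfected (S[-]/P[-]) contacts, then applying it to the
# symptomatic-but-PCR-negative group. All values are simulated.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)
# 30-day change in anti-nucleocapsid signal (hypothetical units)
delta_infected   = rng.normal(1.5, 1.0, 354)   # S[+]/P[+]
delta_uninfected = rng.normal(0.0, 0.5, 103)   # S[-]/P[-]

y     = np.r_[np.ones(len(delta_infected)), np.zeros(len(delta_uninfected))]
delta = np.r_[delta_infected, delta_uninfected]

auc = roc_auc_score(y, delta)
fpr, tpr, thresholds = roc_curve(y, delta)
best = thresholds[np.argmax(tpr - fpr)]        # Youden's J cut-off (assumed rule)

# Classify the S[+]/P[-] group against the derived threshold.
delta_sp_neg = rng.normal(0.1, 0.6, 263)
frac_seroresponse = np.mean(delta_sp_neg > best)
print(f"AUC = {auc:.2f}, threshold = {best:.2f}, "
      f"seroresponse in S[+]/P[-]: {frac_seroresponse:.1%}")
```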
Inappropriate diagnosis and treatment of urinary tract infections (UTIs) contribute to antibiotic overuse. The Inappropriate Diagnosis of UTI (ID-UTI) measure uses a standard definition of asymptomatic bacteriuria (ASB) and was validated in large hospitals. Critical access hospitals (CAHs) have different resources, which may make ASB stewardship challenging. To address this inequity, we adapted the ID-UTI metric for use in CAHs and assessed the adapted measure's feasibility, validity, and reliability.
Design:
Retrospective observational study
Participants:
10 CAHs
Methods:
From October 2022 to July 2023, CAHs submitted clinical information for adults admitted or discharged from the emergency department who received antibiotics for a positive urine culture. Feasibility of case submission was assessed as the number of CAHs achieving the goal of 59 cases. Validity (sensitivity/specificity) and reliability of the ID-UTI definition were assessed by dual-physician review of a random sample of submitted cases.
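For context, the validity statistics reported in the Results reduce to a 2x2 comparison of the metric against physician review; a minimal sketch follows, with hypothetical counts chosen only to show the arithmetic (they are not the study data).

```python
# Generic 2x2 validity calculation for a measure versus physician review.
# The counts are hypothetical, chosen only to illustrate how sensitivity and
# specificity are computed from true/false positives and negatives.
def validity(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)     # reviewer-confirmed ASB cases the metric flags
    specificity = tn / (tn + fp)     # reviewer-confirmed non-ASB cases the metric clears
    return sensitivity, specificity

sens, spec = validity(tp=16, fp=0, fn=17, tn=25)   # hypothetical counts
print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")
```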
Results:
Among 10 CAHs able to participate throughout the study period, only 40% (4/10) submitted >59 cases (goal); an additional 3 submitted >35 cases (secondary goal). Per the ID-UTI metric, 28% (16/58) of cases were ASB. Compared to physician review, the ID-UTI metric had 100% specificity (ie, all cases called ASB were ASB on clinical review) but poor sensitivity (48.5%; ie, it did not identify all ASB cases). Measure reliability was high (93% [54/58] agreement).
Conclusions:
Similar to measure performance in non-CAHs, the ID-UTI measure had high reliability and specificity—all cases identified as ASB were considered ASB—but poor sensitivity. Though case submission was feasible for a subset of CAHs, barriers remain.
Asymptomatic bacteriuria (ASB) treatment is a common form of antibiotic overuse and diagnostic error. Antibiotic stewardship using the inappropriate diagnosis of urinary tract infection (ID-UTI) measure has reduced ASB treatment in diverse hospitals. However, critical access hospitals (CAHs) have differing resources that could impede stewardship. We aimed to determine if stewardship including the ID-UTI measure could reduce ASB treatment in CAHs.
Methods:
From October 2022 to July 2023, ten CAHs participated in an Intensive Quality Improvement Cohort (IQIC) program including 3 interventions to reduce ASB treatment: 1) learning labs (ie, didactics with shared learning), 2) mentoring, and 3) data-driven performance reports including hospital peer comparison based on the ID-UTI measure. To assess effectiveness of the IQIC program, change in the ID-UTI measure (ie, percentage of patients treated for a UTI who had ASB) was compared to two non-equivalent control outcomes (antibiotic duration and unjustified fluoroquinolone use).
Results:
Ten CAHs abstracted a total of 608 positive urine culture cases. Over the cohort period, the percentage of patients treated for a UTI who had ASB declined (aOR per month = 0.935, 95% CI: 0.873, 1.001, P = 0.055) from 28.4% (range across hospitals, 0%-63%) in the first month to 18.6% (range, 0%-33%) in the final month. In contrast, antibiotic duration and unjustified fluoroquinolone use were unchanged (P = 0.768 and 0.567, respectively).
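The monthly trend reported above corresponds to exponentiating the month coefficient from a logistic regression of ASB status among treated cases; a minimal sketch with simulated data and hypothetical variable names (not the IQIC analysis code, which may have used different covariates) is shown below.

```python
# Minimal sketch of estimating an adjusted odds ratio per month for ASB among
# treated positive-urine-culture cases. Simulated data, hypothetical covariates;
# not the IQIC analysis code.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 608
df = pd.DataFrame({
    "month":    rng.integers(0, 10, n),               # months since cohort start
    "hospital": rng.integers(0, 10, n).astype(str),   # CAH identifier
})
# Simulate a slowly declining probability that a treated case is ASB.
logit = -0.9 - 0.07 * df["month"]
df["asb"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = smf.logit("asb ~ month + C(hospital)", data=df).fit(disp=0)
print("aOR per month:", np.exp(model.params["month"]))
```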
Conclusions:
The IQIC intervention, including learning labs, mentoring, and performance reports using the ID-UTI measure, was associated with a non-significant decrease in treatment of ASB, while control outcomes (duration and unjustified fluoroquinolone use) did not change.
Accelerating COVID-19 Therapeutic Interventions and Vaccines (ACTIV) was initiated by the US government in 2020 to rapidly develop and test vaccines and therapeutics against COVID-19. The ACTIV Therapeutics-Clinical Working Group selected ACTIV trial teams and clinical networks to expeditiously develop and launch master protocols based on therapeutic targets and patient populations. The suite of clinical trials was designed to collectively inform therapeutic care for COVID-19 outpatient, inpatient, and intensive care populations globally. In this report, we highlight challenges, strategies, and solutions around clinical protocol development and regulatory approval to document our experience and propose plans for future similar healthcare emergencies.
Diagnostic criteria for major depressive disorder allow for heterogeneous symptom profiles, but genetic analysis of major depressive symptoms has the potential to identify clinical and etiological subtypes. There are several challenges to integrating symptom data from genetically informative cohorts, such as sample size differences between clinical and community cohorts and various patterns of missing data.
Methods
We conducted genome-wide association studies of major depressive symptoms in three cohorts that were enriched for participants with a diagnosis of depression (Psychiatric Genomics Consortium, Australian Genetics of Depression Study, Generation Scotland) and three community cohorts who were not recruited on the basis of diagnosis (Avon Longitudinal Study of Parents and Children, Estonian Biobank, and UK Biobank). We fit a series of confirmatory factor models with factors that accounted for how symptom data was sampled and then compared alternative models with different symptom factors.
Results
The best fitting model had a distinct factor for Appetite/Weight symptoms and an additional measurement factor that accounted for the skip-structure in community cohorts (use of Depression and Anhedonia as gating symptoms).
Conclusion
The results show the importance of assessing the directionality of symptoms (such as hypersomnia versus insomnia) and of accounting for study and measurement design when meta-analyzing genetic association data.
Medical researchers are increasingly prioritizing the inclusion of underserved communities in clinical studies. However, mere inclusion is not enough. People from underserved communities frequently experience chronic stress that may lead to accelerated biological aging and early morbidity and mortality. It is our hope and intent that the medical community come together to engineer improved health outcomes for vulnerable populations. Here, we introduce Health Equity Engineering (HEE), a comprehensive scientific framework to guide research on the development of tools to identify individuals at risk of poor health outcomes due to chronic stress, the integration of these tools within existing healthcare system infrastructures, and a robust assessment of their effectiveness and sustainability. HEE is anchored in the premise that strategic intervention at the individual level, tailored to the needs of the most at-risk people, can pave the way for achieving equitable health standards at a broader population level. HEE provides a scientific framework guiding health equity research to equip the medical community with a robust set of tools to enhance health equity for current and future generations.
Seismic imaging in 3-D holds great potential for improving our understanding of ice sheet structure and dynamics. Conducting 3-D imaging in remote areas is simplified by using lightweight and logistically straightforward sources. We report results from controlled seismic source tests carried out near the West Antarctic Ice Sheet Divide investigating the characteristics of two types of surface seismic sources, Poulter shots and detonating cord, for use in both 2-D and 3-D seismic surveys on glaciers. Both source types produced strong basal P-wave and S-wave reflections and multiples recorded in three components. The Poulter shots had a higher amplitude for low frequencies (<10 Hz) and comparable amplitude at high frequencies (>50 Hz) relative to the detonating cord. Amplitudes, frequencies, speed of source set-up, and cost all suggested Poulter shots to be the preferred surface source compared to detonating cord for future 2-D and 3-D seismic surveys on glaciers.
The acid-catalyzed reaction between methanol and isobutene to give methyl-t-butyl ether may be carried out using a cation-exchanged smectite as the catalyst. In 1,4-dioxan solvent at 60°C smectites exchanged with Al3+, Fe3+, or Cr3+ give yields of ∼60% after 4 hr, whereas smectites exchanged with Cu2+, Pb2+, Ni2+, Co2+, Ca2+, and Na+ give less than ∼8% yield. The reaction is efficient only when certain solvents are used, e.g., with Al3+-smectite the yield is ∼5% when using 1,2-dimethoxyethane, diethyleneglycol diethylether, n-pentane, tetrahydropyran, N-methylmorpholine, or tetrahydrofuran solvents compared with ∼60% using 1,4-dioxan solvent (4 hr). Moreover, the effective solvents depend somewhat on the clay interlayer cation. The use of tetrahydrofuran and tetrahydropyran gives ∼35% yields at 60°C (4 hr) with Fe3+- or Cr3+-smectites but ∼4% yield with Al3+-smectite.
The reaction of 2-methyl pent-2-ene with primary alcohols (C1-C18) at 95°C over an Al-montmorillonite gave yields of 20–90% of ethers of the type R-O-C(CH3)2C3H7. Lower yields were produced if secondary alcohols were employed, and tertiary alcohols gave only a trace of this ether. When a variety of alkenes was reacted with butan-1-ol at 95°C over a similar catalyst, no reaction occurred unless the alkene was capable of forming a tertiary carbonium ion immediately upon protonation. In this case the product was the tertiary ether t-R-O-nC4H9. However, at a reaction temperature of 150°C a variety of products were formed including (1) ether by the attack of butanol on the carbonium ions produced either directly from protonation of the alkenes or by hydride shift from such an ion, (2) alkenes by the attack of n-C4H9+ ions (derived from protonation and dehydration of butanol) on the alkene, (3) di-(but-1-yl) ether by dehydration of the butanol, and (4) small amounts of alcohol by hydration of the alkene. The differences in reactivity below and above 100°C are related directly to the amount of water present in the interlayer space of the clay and the degree of acidity found there. Although the clay behaves as an acid catalyst, the reactions are far cleaner (more selective) than comparable reactions catalyzed by sulfuric acid.
To compare the agreement and cost of two recall methods for estimating children’s minimum dietary diversity (MDD).
Design:
We assessed each child's dietary intake on two consecutive days: an observation on day one, followed by two recall methods (list-based recall and multiple-pass recall) administered in random order by different enumerators at two different times on day two. We compared the estimated MDD prevalence using survey-weighted linear probability models following a two one-sided test equivalence testing approach. We also estimated the cost-effectiveness of the two methods.
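A bare-bones version of the two one-sided test (TOST) logic is sketched below on simulated paired binary data; the 10-percentage-point equivalence margin and all values are assumptions for illustration, and the actual analysis used survey-weighted linear probability models rather than this simple paired t-based version.

```python
# Equivalence (TOST) sketch: does a recall method's MDD classification match
# direct observation within a pre-specified margin? Paired binary data are
# simulated; the margin and agreement rate are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 636
observed = rng.binomial(1, 0.55, n)                 # MDD met, per observation
recall = np.where(rng.random(n) < 0.93, observed,
                  1 - observed)                     # recall agrees ~93% of the time

diff = recall.astype(float) - observed.astype(float)
margin = 0.10                                       # hypothetical equivalence margin

mean, se = diff.mean(), diff.std(ddof=1) / np.sqrt(n)
t_lower = (mean + margin) / se                      # H0: true diff <= -margin
t_upper = (mean - margin) / se                      # H0: true diff >= +margin
p_lower = stats.t.sf(t_lower, df=n - 1)
p_upper = stats.t.cdf(t_upper, df=n - 1)
p_tost = max(p_lower, p_upper)                      # equivalence if p_tost < alpha
print(f"mean difference = {mean:.3f}, TOST p = {p_tost:.3g}")
```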
Setting:
Cambodia (Kampong Thom, Siem Reap, Battambang, and Pursat provinces) and Zambia (Chipata, Katete, Lundazi, Nyimba, and Petauke districts).
Participants:
Children aged 6–23 months: 636 in Cambodia and 608 in Zambia.
Results:
MDD estimations from both recall methods were equivalent to the observation in Cambodia but not in Zambia. Both methods were equivalent to the observation in capturing most food groups. Both methods were highly sensitive although the multiple-pass method accurately classified a higher proportion of children meeting MDD than the list-based method in both countries. Both methods were highly specific in Cambodia but moderately so in Zambia. Cost-effectiveness was better for the list-based recall method in both countries.
Conclusion:
The two recall methods estimated MDD and most other infant and young child feeding indicators equivalently in Cambodia but not in Zambia, compared to the observation. The list-based method produced slightly more accurate estimates of MDD at the population level, took less time to administer and was less costly to implement.
Cognitive training has shown promise for improving cognition in older adults. Aging involves a variety of neuroanatomical changes that may affect response to cognitive training. White matter hyperintensities (WMH), visible on T2-weighted and Fluid-Attenuated Inversion Recovery (FLAIR) MRI, are one common age-related brain change. WMH are associated with older age, are suggestive of cerebral small vessel disease, and reflect decreased white matter integrity. Higher WMH load is associated with a reduced threshold for clinical expression of cognitive impairment and dementia. The effects of WMH on response to cognitive training interventions are relatively unknown. The current study assessed (a) proximal cognitive training performance following a 3-month randomized controlled trial and (b) the contribution of baseline whole-brain WMH load, defined as total lesion volume (TLV), to pre-post proximal training change.
Participants and Methods:
Sixty-two healthy older adults ages 65-84 completed either adaptive cognitive training (CT; n=31) or educational training control (ET; n=31) interventions. Participants assigned to CT completed 20 hours of attention/processing speed training and 20 hours of working memory training delivered through commercially available Posit Science BrainHQ. ET participants completed 40 hours of educational videos. All participants also underwent sham or active transcranial direct current stimulation (tDCS) as an adjunctive intervention, although tDCS was not a variable of interest in the current study. Multimodal MRI scans were acquired during the baseline visit. T1-weighted and T2-weighted FLAIR images were processed using the Lesion Segmentation Tool (LST) for SPM12. The Lesion Prediction Algorithm of LST automatically segmented brain tissue and calculated lesion maps. A lesion threshold of 0.30 was applied to calculate TLV. A log transformation was applied to TLV to normalize the distribution of WMH. Repeated-measures analysis of covariance (RM-ANCOVA) assessed pre/post change in proximal composite (Total Training Composite) and sub-composite (Processing Speed Training Composite, Working Memory Training Composite) measures in the CT group compared to their ET counterparts, controlling for age, sex, years of education, and tDCS group. Linear regression assessed the effect of TLV on post-intervention proximal composite and sub-composite measures, controlling for baseline performance, intervention assignment, age, sex, years of education, multisite scanner differences, estimated total intracranial volume, and binarized cardiovascular disease risk.
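The TLV analysis step can be sketched generically as a log transformation of lesion volume followed by a covariate-adjusted regression of post-training performance on baseline lesion load; the code below uses simulated data and hypothetical variable names, and it omits several covariates from the actual models.

```python
# Sketch of testing whether baseline white-matter lesion load (log-transformed
# total lesion volume, TLV) predicts post-training performance, controlling for
# baseline score and group assignment. Data and variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 62
df = pd.DataFrame({
    "tlv_ml":   rng.lognormal(mean=1.0, sigma=0.8, size=n),   # lesion volume (ml)
    "baseline": rng.normal(0, 1, n),                           # pre-training composite
    "group":    rng.integers(0, 2, n),                         # 1 = cognitive training
    "age":      rng.uniform(65, 84, n),
})
df["log_tlv"] = np.log(df["tlv_ml"])                           # normalize skewed TLV
df["post"] = (df["baseline"] + 0.3 * df["group"]
              - 0.15 * df["log_tlv"] + rng.normal(0, 0.5, n))  # simulated outcome

fit = smf.ols("post ~ baseline + group + log_tlv + age", data=df).fit()
print("log TLV effect:", fit.params["log_tlv"], "p =", fit.pvalues["log_tlv"])
```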
Results:
RM-ANCOVA revealed two-way group*time interactions such that those assigned cognitive training demonstrated greater improvement on proximal composite (Total Training Composite) and sub-composite (Processing Speed Training Composite, Working Memory Training Composite) measures compared to their ET counterparts. Multiple linear regression showed higher baseline TLV associated with lower pre-post change on Processing Speed Training sub-composite (β = -0.19, p = 0.04) but not other composite measures.
Conclusions:
These findings demonstrate the utility of cognitive training for improving post-intervention proximal performance in older adults. Additionally, pre-post change on the processing speed training sub-composite appears to be particularly sensitive to white matter hyperintensity load, relative to change on the working memory training sub-composite. These data suggest that TLV may be an important factor to consider when planning processing speed-based cognitive training interventions for remediation of cognitive decline in older adults.
Aging is associated with disruptions in functional connectivity within the default mode (DMN), frontoparietal control (FPCN), and cingulo-opercular (CON) resting-state networks. Greater within-network connectivity predicts better cognitive performance in older adults. Therefore, strengthening network connectivity, through targeted intervention strategies, may help prevent age-related cognitive decline or progression to dementia. Small studies have demonstrated synergistic effects of combining transcranial direct current stimulation (tDCS) and cognitive training (CT) on strengthening network connectivity; however, this association has yet to be rigorously tested on a large scale. The current study leverages longitudinal data from the first-ever Phase III clinical trial for tDCS to examine the efficacy of an adjunctive tDCS and CT intervention on modulating network connectivity in older adults.
Participants and Methods:
This sample included 209 older adults (mean age = 71.6) from the Augmenting Cognitive Training in Older Adults multisite trial. Participants completed 40 hours of CT over 12 weeks, which included 8 attention, processing speed, and working memory tasks. Participants were randomized into active or sham stimulation groups, and tDCS was administered during CT daily for two weeks, then weekly for 10 weeks. For both stimulation groups, two electrodes in saline-soaked 5 × 7 cm² sponges were placed at F3 (cathode) and F4 (anode) using the 10-20 measurement system. The active group received 2 mA of current for 20 minutes. The sham group received 2 mA for 30 seconds, then no current for the remaining 20 minutes.
Participants underwent resting-state fMRI at baseline and post-intervention. The CONN toolbox was used to preprocess imaging data and conduct region-of-interest-to-region-of-interest (ROI-ROI) connectivity analyses. The Artifact Detection Toolbox, using intermediate settings, identified outlier volumes. Two participants were excluded for having greater than 50% of volumes flagged as outliers. ROI-ROI analyses modeled the interaction between tDCS group (active versus sham) and occasion (baseline versus post-intervention connectivity) for the DMN, FPCN, and CON, controlling for age, sex, education, site, and adherence.
Results:
Compared to sham, the active group demonstrated ROI-ROI increases in functional connectivity within the DMN following intervention (left temporal to right temporal [T(202) = 2.78, pFDR < 0.05] and left temporal to right dorsal medial prefrontal cortex [T(202) = 2.74, pFDR < 0.05]). In contrast, compared to sham, the active group demonstrated ROI-ROI decreases in functional connectivity within the FPCN following intervention (left dorsal prefrontal cortex to left temporal [T(202) = -2.96, pFDR < 0.05] and left dorsal prefrontal cortex to left lateral prefrontal cortex [T(202) = -2.77, pFDR < 0.05]). There were no significant interactions detected for CON regions.
Conclusions:
These findings (a) demonstrate the feasibility of modulating network connectivity using tDCS and CT and (b) provide important information regarding the pattern of connectivity changes occurring at these intervention parameters in older adults. Importantly, the active stimulation group showed increases in connectivity within the DMN (a network particularly vulnerable to aging and implicated in Alzheimer’s disease) but decreases in connectivity between left frontal and temporal FPCN regions. Future analyses from this trial will evaluate the association between these changes in connectivity and cognitive performance post-intervention and at a one-year timepoint.
Nonpathological aging has been linked to decline in both verbal and visuospatial memory abilities in older adults. Disruptions in resting-state functional connectivity within well-characterized, higher-order cognitive brain networks have also been coupled with poorer memory functioning in healthy older adults and in older adults with dementia. However, there is a paucity of research on the association between higher-order functional connectivity and verbal and visuospatial memory performance in the older adult population. The current study examines the association between resting-state functional connectivity within the cingulo-opercular network (CON), frontoparietal control network (FPCN), and default mode network (DMN) and verbal and visuospatial learning and memory in a large sample of healthy older adults. We hypothesized that greater within-network CON and FPCN functional connectivity would be associated with better immediate verbal and visuospatial memory recall. Additionally, we predicted that within-network DMN functional connectivity would be associated with improvements in delayed verbal and visuospatial memory recall. This study provides insight into whether within-network CON, FPCN, or DMN functional connectivity is associated with verbal and visuospatial memory abilities in later life.
Participants and Methods:
330 healthy older adults between 65 and 89 years old (mean age = 71.6 ± 5.2) were recruited at the University of Florida (n = 222) and the University of Arizona (n = 108). Participants underwent resting-state fMRI and completed verbal memory (Hopkins Verbal Learning Test - Revised [HVLT-R]) and visuospatial memory (Brief Visuospatial Memory Test - Revised [BVMT-R]) measures. Immediate (total) and delayed recall scores on the HVLT-R and BVMT-R were calculated using each test manual’s scoring criteria. Learning ratios on the HVLT-R and BVMT-R were quantified by dividing the number of stimuli (verbal or visuospatial) learned between the first and third trials by the number of stimuli not recalled after the first learning trial. CONN Toolbox was used to extract average within-network connectivity values for CON, FPCN, and DMN. Hierarchical regressions were conducted, controlling for sex, race, ethnicity, years of education, number of invalid scans, and scanner site.
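The learning-ratio computation described above amounts to dividing the gain from the first to the third learning trial by the room left to learn after the first trial; a small helper illustrating this arithmetic (using the 12-item HVLT-R word list as the example) is shown below.

```python
# Learning ratio as described above: stimuli gained from trial 1 to trial 3,
# divided by the stimuli still unlearned after trial 1 (illustrative helper;
# actual scoring follows each test manual).
def learning_ratio(trial1_recall, trial3_recall, total_items):
    gained = trial3_recall - trial1_recall
    room_to_learn = total_items - trial1_recall
    return gained / room_to_learn if room_to_learn > 0 else float("nan")

# Example: the HVLT-R list has 12 words; recalling 5 on trial 1 and 10 on
# trial 3 gives a learning ratio of (10 - 5) / (12 - 5) ≈ 0.71.
print(learning_ratio(5, 10, 12))
```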
Results:
Greater CON connectivity was significantly associated with better HVLT-R immediate (total) recall (β = 0.16, p = 0.01), HVLT-R learning ratio (β = 0.16, p = 0.01), BVMT-R immediate (total) recall (β = 0.14, p = 0.02), and BVMT-R delayed recall performance (β = 0.15, p = 0.01). Greater FPCN connectivity was associated with better BVMT-R learning ratio (β = 0.13, p = 0.04). HVLT-R delayed recall performance was not associated with connectivity in any network, and DMN connectivity was not significantly related to any measure.
Conclusions:
Connectivity within the CON demonstrated a robust relationship with multiple components of memory function across both the verbal and visuospatial domains. In contrast, FPCN connectivity only evidenced a relationship with visuospatial learning, and DMN connectivity was not significantly associated with any memory measure. These data suggest that the CON may be a valuable target in longitudinal studies of age-related memory changes, as well as a possible target for future non-invasive interventions to attenuate memory decline in older adults.
We present seismic measurements of the firn column at Korff Ice Rise, West Antarctica, including measurements of compressional-wave velocity and attenuation. We describe a modified spectral-ratio method of measuring the seismic quality factor (Q) based on analysis of diving waves, which, combined with a stochastic method of error propagation, enables us to characterise the attenuative structure of firn in greater detail than has previously been possible. Q increases from 56 ± 23 in the uppermost 12 m to 570 ± 450 between 55 and 77 m depth. We corroborate our method with consistent measurements obtained via primary reflection, multiple, source ghost, and critically refracted waves. Using the primary reflection and its ghost, we find Q = 53 ± 20 in the uppermost 20 m of firn. From the critical refraction, we find Q = 640 ± 400 at 90 m depth. Our method aids the understanding of the seismic structure of firn and benefits characterisation of deeper glaciological targets, providing an alternative means of correcting seismic reflection amplitudes in cases where conventional methods of Q correction may be impossible.
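For orientation, the conventional (unmodified) spectral-ratio method assumes the log ratio of amplitude spectra from two paths is linear in frequency, ln[A2(f)/A1(f)] = -π f Δt / Q + const, so Q follows from the fitted slope. The sketch below illustrates that generic relation on synthetic data; it does not reproduce the paper's modified method or its stochastic error propagation.

```python
# Generic spectral-ratio Q estimate: the log ratio of amplitude spectra for two
# travel paths is linear in frequency with slope -pi * dt / Q. This is the
# standard method on synthetic data, not the paper's modified approach.
import numpy as np

freqs = np.linspace(5.0, 80.0, 60)        # Hz, assumed usable bandwidth
dt = 0.05                                 # s, extra travel time on the longer path
Q_true = 60.0

rng = np.random.default_rng(5)
log_ratio = -np.pi * freqs * dt / Q_true + 0.4 + rng.normal(0, 0.02, freqs.size)

slope, intercept = np.polyfit(freqs, log_ratio, 1)   # linear fit over frequency
Q_est = -np.pi * dt / slope
print(f"estimated Q = {Q_est:.0f}")
```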
Many clinical trials leverage real-world data. Typically, these data are manually abstracted from electronic health records (EHRs) and entered into electronic case report forms (CRFs), a time- and labor-intensive process that is also error-prone and may miss information. Automated transfer of data from EHRs to CRFs has the potential to reduce data abstraction and entry burden as well as improve data quality and safety.
Methods:
We conducted a test of automated EHR-to-CRF data transfer for 40 participants in a clinical trial of hospitalized COVID-19 patients. We determined which coordinator-entered data could be automated from the EHR (coverage) and how often the values from the automated EHR feed exactly matched the values entered by study personnel for the actual study (concordance).
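Coverage and concordance as defined above reduce to simple proportions over a field-level comparison; a minimal sketch with a hypothetical comparison table (column names invented for illustration) is shown below.

```python
# Coverage and concordance computed from a field-level comparison table
# (illustrative; the table and column names are hypothetical).
import pandas as pd

fields = pd.DataFrame({
    "crf_value": ["72", "120/80", "male", "8.1"],
    "ehr_value": ["72", "118/80", "male", None],   # None = not available from the feed
})

automated = fields["ehr_value"].notna()
coverage = automated.mean()                        # share of CRF values the feed populates
matches = (fields.loc[automated, "ehr_value"]
           == fields.loc[automated, "crf_value"])
concordance = matches.mean()                       # exact-match rate where both exist
print(f"coverage = {coverage:.0%}, concordance = {concordance:.0%}")
```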
Results:
The automated EHR feed populated 10,081/11,952 (84%) of coordinator-completed values. For fields where both the automation and study personnel provided data, the values matched exactly 89% of the time. Concordance was highest for daily lab results (94%), which also required the most personnel resources (30 minutes per participant). In a detailed analysis of 196 instances in which the automation-derived and personnel-entered values differed, both a study coordinator and a data analyst agreed that 152 (78%) were the result of data entry error.
Conclusions:
An automated EHR feed has the potential to significantly decrease study personnel effort while improving the accuracy of CRF data.