Neuropsychiatry training in the UK currently lacks a formal scheme or qualification, and its demand and availability have not been systematically explored. We conducted the largest UK-wide survey of psychiatry trainees to examine their experiences in neuropsychiatry training.
Results
In total, 185 trainees from all UK training regions completed the survey. Although 43.6% expressed interest in a neuropsychiatry career, only 10% felt they would gain sufficient experience by the end of training. Insufficient access to clinical rotations was the most common barrier, with significantly better access in London compared with other regions. Most respondents were in favour of additional neurology training (83%) and a formal accreditation in neuropsychiatry (90%).
Clinical implications
Strong trainee interest in neuropsychiatry contrasts with the limited training opportunities currently available nationally. Our survey highlights the need for increased neuropsychiatry training opportunities, development of a formalised training programme and a clinical accreditation pathway for neuropsychiatry in the UK.
Shift workers in Australia constitute approximately 16% of the workforce, with nearly half working a rotating shift pattern(1). Whilst poor dietary habits of shift workers have been extensively reported, along with increased risk of metabolic health conditions such as obesity, cardiovascular disease and diabetes compared with non-shift workers(2,3,4), studies of shift-working populations rarely control for individual and lifestyle factors that might influence dietary profiles. While rotating shift work schedules have been linked with higher energy intake than daytime schedules(5), little is known about the impact of different night shift schedules (e.g., fixed night vs rotating schedules) on the diets of shift workers, including differences in 24-hour energy intake and nutrient composition. This observational study investigated the dietary habits of night shift workers with overweight/obesity and compared the impact of rotating and fixed night shift schedules on dietary profiles. We hypothesised that shift workers' diets overall would deviate from national nutrition recommendations, and that those working rotating shift schedules would have higher energy consumption than those working fixed night schedules. Participants were from the Shifting Weight using Intermittent Fasting in night shift workers (SWIFt) trial, a randomised controlled weight loss trial, and provided 7-day food diaries upon enrolment. Mean energy intakes (EI) and the percentage of EI from macronutrients, fibre, saturated fat, added sugar, alcohol, and the amount of sodium were evaluated against Australian adult recommendations. Total group and subgroup analyses of fixed night vs rotating schedules' dietary profiles were conducted, including assessment of plausible and non-plausible energy intake reporters. Hierarchical regression analyses were conducted on nutrient intakes, controlling for the individual and lifestyle factors of age, gender, BMI, physical activity, shift work exposure, occupation and work schedule. Overall, night shift workers (n = 245) had diets characterised by high fat/saturated fat/sodium content and low carbohydrate/fibre intake compared with nutrition recommendations, regardless of shift schedule type. Rotating shift workers (n = 121) had a higher mean 24-hour EI than fixed night workers (n = 122) (9329 ± 2915 kJ vs 8025 ± 2383 kJ, p < 0.001), with differences remaining when only plausible EI reporters were included (n = 130) (10968 ± 2411 kJ vs 9307 ± 2070 kJ, p < 0.001). These findings highlight poor dietary choices among this population of shift workers, and the higher energy intakes of rotating shift workers, which may contribute to the poor metabolic health outcomes often associated with working night shifts.
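As a concrete illustration of the blockwise ("hierarchical") regression strategy described above, the sketch below enters individual covariates first and the work-schedule term second, reading the schedule effect off the change in explained variance. All column names and data are simulated placeholders, not the SWIFt dataset.

```python
# Minimal sketch of a hierarchical (blockwise) regression; columns and
# data are illustrative, not the SWIFt study variables.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 245
df = pd.DataFrame({
    "age": rng.normal(40, 10, n),
    "bmi": rng.normal(32, 4, n),
    "rotating": rng.integers(0, 2, n),       # 1 = rotating, 0 = fixed night
    "energy_kj": rng.normal(8700, 2600, n),  # 24-hour energy intake
})

# Block 1: individual/lifestyle factors only.
X1 = sm.add_constant(df[["age", "bmi"]])
m1 = sm.OLS(df["energy_kj"], X1).fit()

# Block 2: add work schedule; the increment in R-squared is the schedule
# effect after controlling for the block-1 covariates.
X2 = sm.add_constant(df[["age", "bmi", "rotating"]])
m2 = sm.OLS(df["energy_kj"], X2).fit()

print(f"R2 block 1: {m1.rsquared:.3f}, block 2: {m2.rsquared:.3f}")
```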
Approximately 15% of Australia’s workforce are shift workers, who are at greater risk for obesity and related conditions, such as type 2 diabetes and cardiovascular disease.(1,2,3) While current guidelines for obesity management prioritise diet-induced weight loss as a treatment option, there are few weight-loss studies involving night shift workers and no existing exploration of the factors associated with engagement in weight-loss interventions. The Shifting Weight using Intermittent Fasting in night shift workers (SWIFt) study was a randomised controlled trial that compared three 24-week weight-loss interventions: continuous energy restriction (CER), and 500-calorie intermittent fasting (IF) for 2 days per week undertaken either during the day (IF:2D) or during the night shift (IF:2N). The current study used a convergent mixed-methods experimental design to: 1) explore the relationship between participant characteristics, dietary intervention group and time to dropout for the SWIFt study (quantitative); and 2) understand why some participants were more likely to drop out of the intervention (qualitative). Participant characteristics included age, gender, ethnicity, occupation, shift schedule, number of night shifts per four weeks, number of years in shift work, weight at baseline, weight change at four weeks, and quality of life at baseline. A Cox regression model was used to specify time to dropout from the intervention as the dependent variable, and purposive selection was used to determine predictors for the model. Semi-structured interviews at baseline and 24 weeks were conducted, and audio diaries every two weeks were collected from participants using a maximum variation sampling approach and analysed using the five steps of framework analysis.(4) A total of 250 participants were randomised to the study between October 2019 and February 2022. Two participants were excluded from analysis due to retrospective ineligibility. Twenty-nine percent (n = 71) of participants dropped out of the study over the 24-week intervention. Greater weight at baseline, fewer years working shift work, lower weight change at four weeks, and being a woman (compared with a man) were associated with a significantly increased rate of dropout from the study (p < 0.05). Forty-seven interviews from 33 participants were conducted and 18 participants completed audio diaries. Lack of time, fatigue and emotional eating were barriers more frequently reported by women. Participants with a higher weight at baseline more frequently reported fatigue and emotional eating as barriers, and limited guidance on non-fasting days as a barrier for the IF interventions. This study provides important considerations for refining shift-worker weight-loss interventions for future implementation, in order to increase engagement and mitigate the adverse health risks experienced by this essential workforce.
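A minimal sketch of the time-to-dropout analysis described above, assuming the Python lifelines package; the variable names and simulated data are illustrative stand-ins, not the SWIFt codebook.

```python
# Cox proportional-hazards sketch for time to dropout; all columns other
# than the duration and event columns enter as covariates.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 248
df = pd.DataFrame({
    "weeks_followed": rng.uniform(1, 24, n),    # time in study
    "dropped_out": rng.integers(0, 2, n),       # 1 = dropout event observed
    "baseline_weight": rng.normal(95, 15, n),
    "years_shift_work": rng.uniform(1, 20, n),
    "weight_change_4wk": rng.normal(-1.0, 1.5, n),
    "female": rng.integers(0, 2, n),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="weeks_followed", event_col="dropped_out")
cph.print_summary()  # hazard ratios: HR > 1 means faster dropout
```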
Underrepresentation of diverse populations in medical research undermines generalizability, exacerbates health disparities, and erodes trust in research institutions. This study aimed to identify a suitable survey instrument to measure trust in medical research among Black and Latino communities in Baltimore, Maryland.
Methods:
Based on a literature review, a committee selected two validated instruments for community evaluation: Perceptions of Research Trustworthiness (PoRT) and Trust in Medical Researchers (TiMR). Both were translated into Spanish through a standardized process. Thirty-four individuals participated in four focus groups (two in English, two in Spanish). Participants reviewed and provided feedback on the instruments’ relevance and clarity. Discussions were recorded, transcribed, and analyzed thematically.
Results:
Initial reactions to the instruments were mixed. While 68% found TiMR easier to complete, 74% preferred PoRT. Key discussion themes included the relevance of the instruments for measuring trust, clarity of the questions, and concerns about reinforcing negative perceptions of research. Participants felt that PoRT better aligned with the research goal of measuring community trust in research, though TiMR was seen as easier to understand. Despite PoRT’s lower reading level, some of its items were found to be more confusing than TiMR items.
Conclusion:
Community feedback highlighted the need to differentiate trust in medical research, researchers, and institutions. While PoRT and TiMR are acceptable instruments for measuring trust in medical research, refinement of both may be beneficial. Development and validation of instruments in multiple languages are needed to assess community trust in research and inform strategies to improve diverse participation in research.
The fossil record of dinosaurs in Scotland mostly comprises isolated, highly fragmentary bones from the Great Estuarine Group in the Inner Hebrides (Bajocian–Bathonian). Here we report the first definite dinosaur body fossil found in Scotland: discovered in 1973, it was not collected until 45 years later, and it is the most complete partial dinosaur skeleton currently known from the country. NMS G.2023.19.1 was recovered from a challenging foreshore location on the Isle of Skye and transported to harbour in a semi-rigid inflatable boat towed by a motor boat. After manual preparation, micro-CT scanning was carried out, but this did not aid in identification. Among many unidentifiable elements, a neural arch, two ribs and part of the ilium are described herein, and their features indicate that this was a cerapodan or ornithopod dinosaur. Histological thin sections of one of the ribs support this identification, indicating an individual at least eight years of age that was growing slowly at the time of death. If ornithopodan, as our data suggest, it could represent the world's oldest body fossil of this clade.
Regression is a fundamental prediction task common in data-centric engineering applications that involves learning mappings between continuous variables. In many engineering applications (e.g., structural health monitoring), feature-label pairs used to learn such mappings are of limited availability, which hinders the effectiveness of traditional supervised machine learning approaches. This paper proposes a methodology for overcoming the issue of data scarcity by combining active learning (AL) for regression with hierarchical Bayesian modeling. AL is an approach for preferentially acquiring feature-label pairs in a resource-efficient manner. In particular, the current work adopts a risk-informed approach that leverages contextual information associated with regression-based engineering decision-making tasks (e.g., inspection and maintenance). Hierarchical Bayesian modeling allows multiple related regression tasks to be learned over a population, capturing local and global effects. The information sharing facilitated by this modeling approach means that information acquired for one engineering system can improve predictive performance across the population. The proposed methodology is demonstrated using an experimental case study. Specifically, multiple regressions are performed over a population of machining tools, where the quantity of interest is the surface roughness of the workpieces. An inspection and maintenance decision process is defined using these regression tasks, which is in turn used to construct the active-learning algorithm. The novel methodology proposed is benchmarked against an uninformed approach to label acquisition and independent modeling of the regression tasks. It is shown that the proposed approach has superior performance in terms of expected cost, maintaining predictive performance while reducing the number of inspections required.
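A schematic of the risk-informed acquisition step described above: candidate inspection points are scored by their contribution to expected decision cost, and the highest-risk point is queried next. The cost figures and the toy predictive distribution are placeholder assumptions, not the paper's implementation.

```python
# Risk-informed active-learning acquisition sketch (illustrative only).
import numpy as np
from scipy.stats import norm

def expected_decision_cost(mu, sigma, threshold, c_inspect, c_failure):
    """Expected cost at each candidate: inspecting costs c_inspect, while
    not inspecting risks c_failure weighted by the predicted probability
    that surface roughness exceeds the threshold."""
    p_exceed = 1.0 - norm.cdf(threshold, loc=mu, scale=sigma)
    return np.minimum(c_inspect, p_exceed * c_failure)

# Toy posterior-predictive moments over five unlabeled candidates,
# standing in for the hierarchical Bayesian model's predictions.
mu = np.array([0.8, 1.1, 1.4, 1.0, 1.6])       # predicted roughness
sigma = np.array([0.05, 0.30, 0.10, 0.45, 0.20])

costs = expected_decision_cost(mu, sigma, threshold=1.2,
                               c_inspect=1.0, c_failure=10.0)
next_query = int(np.argmax(costs))   # label the highest-risk candidate
```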
Wetlands in hypersaline environments are especially vulnerable to loss and degradation, as increasing coastal urbanization and climate change rapidly exacerbate freshwater supply stressors. Hypersaline wetlands pose unique management challenges that require innovative restoration perspectives and approaches that consider complex local and regional socio-ecological dynamics. In part, this challenge stems from multiple co-occurring stressors and anthropogenic alterations, including estuary mouth closure and freshwater diversions at the catchment scale. In this article, we discuss challenges and opportunities in the restoration of hypersaline coastal wetland systems, including management of freshwater inflow, shoreline modification, the occurrence of concurrent or sequential stressors, and the knowledge and values of stakeholders and Indigenous peoples. Areas needing additional research and integration into practice are described, and paths forward in adaptive management are discussed. There is a broad need for actionable research on adaptively managing hypersaline wetlands, where outputs will enhance the sustainability and effectiveness of future restoration efforts. Applying a collaborative approach that integrates best practices across a diversity of socio-ecological settings will have global benefits for the effective management of hypersaline coastal wetlands.
This paper considers multiple imputation (MI) approaches for handling non-monotone missing longitudinal binary responses when estimating parameters of a marginal model using generalized estimating equations (GEE). GEE has been shown to yield consistent estimates of the regression parameters for a marginal model when data are missing completely at random (MCAR). However, when data are missing at random (MAR), the GEE estimates may not be consistent; the MI approaches proposed in this paper minimize bias under MAR. The first MI approach proposed is based on a multivariate normal distribution, but with the addition of pairwise products among the binary outcomes to the multivariate normal vector. Even though the multivariate normal does not impute 0 or 1 values for the missing binary responses, as discussed by Horton et al. (Am Stat 57:229–232, 2003), we suggest not rounding when filling in the missing binary data because it could increase bias. The second MI approach considered is the fully conditional specification (FCS) approach. In this approach, we specify a logistic regression model for each outcome given the outcomes at other time points and the covariates. Typically, one would only include main effects of the outcome at the other times as predictors in the FCS approach, but we explore whether bias can be reduced by also including pairwise interactions of the outcomes at other time points in the FCS. In a study of asymptotic bias with non-monotone missing data, the proposed MI approaches are also compared to GEE without imputation. Finally, the proposed methods are illustrated using data from a longitudinal clinical trial comparing four psychosocial treatments from the National Institute on Drug Abuse Collaborative Cocaine Treatment Study, where patients’ cocaine use is collected monthly for 6 months during treatment.
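A simplified sketch of one FCS imputation pass of the kind described above, with pairwise products of the other binary outcomes added as predictors; real FCS would initialize missing values and iterate over multiple passes and imputed datasets, so this illustrates the predictor construction only (names and structure assumed, not the authors' code).

```python
# One FCS pass for T binary outcomes with pairwise-product predictors.
import numpy as np
from itertools import combinations
from sklearn.linear_model import LogisticRegression

def fcs_pass(Y, X, rng):
    """Y: (n, T) array of 0/1 with np.nan for missing values;
    X: (n, p) fully observed covariates; rng: np.random.Generator."""
    Y = Y.copy()
    T = Y.shape[1]
    for t in range(T):
        others = [j for j in range(T) if j != t]
        cols = [X, Y[:, others]]                      # main effects
        if len(others) > 1:                           # pairwise products
            cols.append(np.column_stack(
                [Y[:, a] * Y[:, b] for a, b in combinations(others, 2)]))
        Z = np.column_stack(cols)
        complete = ~np.isnan(Z).any(axis=1)           # simplification: only
        obs = complete & ~np.isnan(Y[:, t])           # complete predictors
        mis = complete & np.isnan(Y[:, t])
        if obs.any() and mis.any():
            fit = LogisticRegression(max_iter=1000).fit(Z[obs], Y[obs, t])
            p = fit.predict_proba(Z[mis])[:, 1]
            Y[mis, t] = rng.binomial(1, p)            # draw, don't round
    return Y
```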
Oysters have unique life history strategies among molluscs and a long history in the fossil record. The Ostreid form, particularly species from the genus Crassostrea, facilitated the invasion into intertidal, estuarine habitats and reef formation. While there is general acknowledgement that oysters have highly variable growth, few studies have quantified variability in oyster allometry. This project aimed to (1) describe the proportional carbonate contributions from each valve and (2) examine length–weight relationships for shell and tissue across an estuarine gradient. We collected 1122 C. virginica from 48 reefs in eight tributaries and the main stem of the Virginia portion of the Chesapeake Bay. On average, the left valve was responsible for 56% of the total weight of the shell, which was relatively consistent across a size range (24.9–172 mm). Nonlinear mixed-effects models for oyster length–weight relationships suggest oysters exhibit allometric growth (b < 3) and substantial inter-reef variation, where upriver reefs in some tributaries appear to produce less shell and tissue biomass on average for a given size. We posit this variability may be due to differences in local conditions, particularly salinity, turbidity, and reef density. Allometric growth maximizes shell production and surface area for oyster settlement, both of which contribute to maintaining the underlying reef structure. Rapid growth and intraspecific plasticity in shell morphology enabled oysters to invade and establish reefs as estuaries moved in concert with changes in sea level over evolutionary time.
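For readers outside fisheries ecology, the length–weight relationships referenced above are conventionally modeled as a power law (the standard formulation, not something specific to this study):

$$W = aL^{b} \quad\Longleftrightarrow\quad \log W = \log a + b\,\log L .$$

Isometric growth corresponds to $b = 3$ (weight scaling with volume), so the reported $b < 3$ indicates that shell and tissue weight accrue more slowly than the cube of shell length.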
Black and Latino individuals are underrepresented in COVID-19 treatment and vaccine clinical trials, calling for an examination of factors that may predict willingness to participate in trials.
Methods:
We administered the Common Survey 2.0 developed by the Community Engagement Alliance (CEAL) Against COVID-19 Disparities to 600 Black and Latino adults in Baltimore City, Prince George’s County, Maryland, Montgomery County, Maryland, and Washington, DC, between October and December 2021. We examined the relationship between awareness of clinical trials, social determinants of health challenges, trust in COVID-19 clinical trial information sources, and willingness to participate in COVID-19 treatment and vaccine trials using multinomial regression analysis.
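A minimal sketch of the multinomial analysis described above, using statsmodels; the outcome coding and column names are hypothetical placeholders, not the CEAL Common Survey 2.0 codebook.

```python
# Multinomial logistic regression sketch for a 3-category willingness
# outcome; data are simulated for illustration.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 600
df = pd.DataFrame({
    "trust_score": rng.uniform(1, 5, n),
    "aware_of_trials": rng.integers(0, 2, n),
    "sdoh_challenges": rng.integers(0, 6, n),
})
willingness = rng.integers(0, 3, n)  # 0=unwilling, 1=unsure, 2=willing

X = sm.add_constant(df)
fit = sm.MNLogit(willingness, X).fit(disp=0)
print(fit.summary())  # coefficients are log-odds relative to category 0
```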
Results:
Approximately half of Black and Latino respondents were unwilling to participate in COVID-19 treatment or vaccine clinical trials. Results showed that increased trust in COVID-19 clinical trial information sources and trial awareness were associated with greater willingness to participate in COVID-19 treatment and vaccine trials among Black and Latino individuals. For Latino respondents, having recently experienced more challenges related to social determinants of health was associated with a decreased likelihood of willingness to participate in COVID-19 vaccine trials.
Conclusions:
The willingness of Black and Latino adults to participate in COVID-19 treatment and vaccine clinical trials is influenced by trial awareness and trust in trial information sources. Ensuring the inclusion of these communities in clinical trials will require approaches that build greater awareness and trust.
According to International Union for the Conservation of Nature (IUCN) guidelines, all species must be assessed against all criteria during the Red Listing process. For organismal groups that are diverse and understudied, assessors face considerable challenges in assembling evidence due to difficulty in applying definitions of key terms used in the guidelines. Challenges also arise because of uncertainty in population sizes (Criteria A, C, D) and distributions (Criteria A2/3/4c, B). Lichens, which are often small, difficult to identify, or overlooked during biodiversity inventories, are one such group for which specific difficulties arise in applying Red List criteria. Here, we offer approaches and examples that address challenges in completing Red List assessments for lichens in a rapidly changing arena of data availability and analysis strategies. While assessors still contend with far from perfect information about individual species, we propose practical solutions for completing robust assessments given the currently available knowledge of individual lichen life-histories.
The goals of this investigation were to 1) identify and measure exposures inside homes of individuals with chemical intolerance (CI), 2) provide guidance for reducing these exposures, and 3) determine whether our environmental house calls (EHCs) intervention could reduce both symptoms and measured levels of indoor air contaminants.
Background:
CI is an international public health and clinical concern, but few resources are available to address patients’ often disabling symptoms. Numerous studies show that levels of indoor air pollutants can be two to five (or more) times higher than outdoor levels. Fragranced consumer products, including cleaning supplies, air fresheners, and personal care products, are symptom triggers commonly reported by susceptible individuals.
Methods:
A team of professionals trained and led by a physician/industrial hygienist and a certified indoor air quality specialist conducted a series of 5 structured EHCs in 37 homes of patients reporting CI.
Results:
We report three case studies demonstrating that an appropriately structured home intervention can teach occupants how to reduce indoor air exposures and associated symptoms. Symptom improvement, documented using the Quick Environmental Exposure and Sensitivity Inventory Symptom Star, corresponded with the reduction of indoor air volatile organic compounds, most notably fragrances. These results provide a deeper dive into 3 of the 37 cases described previously in Perales et al. (2022).
Discussion:
We address the long-standing dilemma that worldwide reports of fragrance sensitivity have not previously been confirmed by human or animal challenge studies. Our ancient immune systems’ ‘first responders’, mast cells, which evolved 500 million years ago, can be sensitized by synthetic organic chemicals whose production and use have grown exponentially since World War II. We propose that these chemicals, which include now-ubiquitous fragrances, trigger mast cell degranulation and inflammatory mediator release in the olfactory-limbic tract, thus altering cerebral blood flow and impairing mood, memory, and concentration (often referred to as ‘brain fog’). The time has come to translate these research findings into clinical and public health practice.
During the 2018 Kīlauea lower East Rift Zone eruption, lava from 24 fissures inundated more than 8000 acres of land, destroying more than 700 structures over three months. Eruptive activity eventually focused at a single vent characterized by a continuously fed lava pond that was drained by a narrow spillway into a much wider, slower channelized flow. The spillway exhibited intervals of ‘pulsing’ behaviour in which the lava depth and velocity were observed to oscillate on time scales of several minutes. At the time, this was attributed to variations in vesiculation originating at depth. Here, we construct a toy fluid dynamical model of the pond–spillway system, and present an alternative hypothesis in which pulsing is generated at the surface, within this system. We posit that the appearance of pulsing is due to a supercritical Hopf bifurcation driven by an increase in the Reynolds number. Asymptotics for the limit cycle near the bifurcation point are derived with averaging methods and compare favourably with the cycle periodicity. Because oscillations in the pond were not observable directly due to the elevation of the cone rim and an obscuring volcanic plume, we model the observations using a spatially averaged Saint-Venant model of the spillway forced by the pond oscillator. The predicted spillway cycle periodicity and waveforms compare favourably with observations made during the eruption. The unusually well-documented nature of this eruption enables estimation of the viscosity of the erupting lava.
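For context on the bifurcation invoked above, the textbook supercritical Hopf normal form in polar coordinates (a generic illustration, not the paper's specific derivation) is

$$\dot{r} = \mu r - r^{3}, \qquad \dot{\theta} = \omega,$$

which has a stable limit cycle of amplitude $r^{*} = \sqrt{\mu}$ once $\mu > 0$. With $\mu$ measuring the excess of the Reynolds number above its critical value, oscillations emerge with amplitude growing like $\sqrt{Re - Re_{c}}$ and period near $2\pi/\omega$, which is the sense in which averaged asymptotics can be compared against an observed cycle periodicity.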
OBJECTIVES/GOALS: Prematurity and perinatal brain injury are known risk factors for strabismus. In this study, we sought to understand the link between neonatal neuroimaging measures in very preterm infants and the emergence of strabismus later in life. Study findings may inform whether neonatal brain MRI could serve as a prognostic tool for this visual disorder. METHODS/STUDY POPULATION: This study draws from a longitudinal cohort of very preterm infants (VPT, <30 weeks gestation, range 23–29 weeks) who underwent an MRI scan at 36 to 43 weeks postmenstrual age (PMA). Anatomic and diffusion MRI data were collected for each child. A subset of thirty-three patients in this cohort had records of an eye exam, which were reviewed for a history of strabismus. Patients with MRI scans demonstrating cystic periventricular leukomalacia or grade III/IV intraventricular hemorrhage were classified as having brain injury. Clinical variables with a known association to strabismus or diffusion metrics were included in a multivariable logistic regression model. Diffusion tractography metrics were screened for association with strabismus on univariable analysis prior to inclusion in the regression model. RESULTS/ANTICIPATED RESULTS: A total of 17/33 (51.5%) patients developed strabismus. A logistic regression model including gestational age, PMA at MRI, retinopathy of prematurity (ROP) stage, brain injury, and fractional anisotropy of the right optic radiation was significant at the .001 level according to the chi-square statistic. The model predicted 88% of responses correctly. Each decrease of 0.01 in the fractional anisotropy of the right optic radiation increased the odds of strabismus by a factor of 1.5 (95% CI 1.03–2.06; p = .03). Patients with brain injury had 15.8 times higher odds of strabismus (95% CI 1.1–216.5; p = .04). Gestational age (OR 1.7; 95% CI 0.9–3.3; p = .1) and stage of ROP (OR 0.6; 95% CI 0.2–2.0; p = .4) were not significant predictors of strabismus in the multivariable model. DISCUSSION/SIGNIFICANCE: Our findings suggest that strabismus in VPT patients may be related to specific changes in brain structure in the neonatal period. The identified association between neonatal optic radiation microstructure and strabismus supports the possibility of using brain MRI in very preterm infants to prognosticate visual and ocular morbidity.
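As a check on how the reported effect maps to a regression coefficient (assuming fractional anisotropy entered the model as a continuous predictor, which the abstract does not state explicitly): an odds ratio of 1.5 per 0.01 decrease implies

$$e^{-0.01\,\beta_{\mathrm{FA}}} = 1.5 \;\Rightarrow\; \beta_{\mathrm{FA}} = -\frac{\ln 1.5}{0.01} \approx -40.5$$

per unit of fractional anisotropy, with the 95% CI of 1.03–2.06 translating to a correspondingly wide interval on $\beta_{\mathrm{FA}}$.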
Despite the growing availability of sensing and data in general, we remain unable to fully characterize many in-service engineering systems and structures from a purely data-driven approach. The vast data and resources available to capture human activity are unmatched in our engineered world, and, even in cases where data could be referred to as “big,” they will rarely hold information across operational windows or life spans. This paper pursues the combination of machine learning technology and physics-based reasoning to enhance our ability to make predictive models with limited data. By explicitly linking the physics-based view of stochastic processes with a data-based regression approach, a derivation path for a spectrum of possible Gaussian process models is introduced and used to highlight how and where different levels of expert knowledge of a system are likely best exploited. Each of the models highlighted in the spectrum has been explored in different ways across communities; novel examples in a structural assessment context here demonstrate how these approaches can significantly reduce reliance on expensive data collection. The increased interpretability of the models shown is another important consideration and benefit in this context.
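A minimal sketch of the middle of that spectrum: a Gaussian process whose prior mean is a physics-based model, so the data only need to learn the residual. The kernel, the physics stand-in, and the data here are illustrative assumptions, not the paper's case study.

```python
# GP regression with a physics-informed mean function (illustrative).
import numpy as np

def rbf(a, b, ls=1.0, var=1.0):
    """Squared-exponential kernel between 1-D input arrays."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / ls) ** 2)

def physics_mean(x):
    """Stand-in for an expert-derived physical model of the trend."""
    return 0.5 * x

rng = np.random.default_rng(2)
x = np.linspace(0.0, 10.0, 15)                 # training inputs
y = physics_mean(x) + 0.3 * np.sin(x) + 0.05 * rng.normal(size=x.size)

xs = np.linspace(0.0, 10.0, 200)               # prediction grid
K = rbf(x, x) + 1e-4 * np.eye(x.size)          # jitter for stability
alpha = np.linalg.solve(K, y - physics_mean(x))
post_mean = physics_mean(xs) + rbf(xs, x) @ alpha  # physics + GP residual
```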
The use of porous filters is indispensable in laboratory- and field-scale diffusion studies, where sample confinement is needed for mechanical reasons. Examples are diffusion studies with compacted swelling clays or brittle clay stones. Knowledge of the diffusion properties of these filters is important in cases where they contribute significantly to the overall diffusive resistance in the experimental setup. In the present study, measurements of effective diffusion coefficients (Db) in porous, stainless steel filter discs are reported for tritiated H2O (HTO), 22Na+, Cs+, and Sr2+ before and after use of the filters in diffusion experiments with different clay minerals. The Db values for used filters were found to be ∼30–50% lower than those of the as-received filters. The Db values measured for the diffusion of HTO, 22Na+, Cs+, and Sr2+ in unused and used stainless steel filter discs correlated fairly well with the respective molecular diffusion coefficients in bulk water. Although such correlations are inherently associated with some uncertainties, they allow reasonable estimates to be made for diffusants for which no Db values are available. For the first time, a procedure is outlined that allows an integrative assessment of the impact of the uncertainties in the filter diffusion properties on the combined standard uncertainties of the diffusion parameters obtained from through-diffusion experiments. This procedure can be used in the design and optimization of through-diffusion experiments in which the diffusive resistance of the porous filters must not be ignored. As a general rule of thumb, we show that if the effective diffusion coefficient in the porous filter is at least three times larger than that in the clay, the choice of geometrical boundary conditions is rather uncritical, provided the clay sample is thicker than the porous filters.
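The rule of thumb can be rationalized with a standard one-dimensional series-resistance picture for a through-diffusion cell (our simplified illustration, not the paper's formal uncertainty analysis). For two filters of thickness $L_f$ and effective diffusion coefficient $D_f$ bounding a clay sample of thickness $L_c$ and effective diffusion coefficient $D_c$, the total diffusive resistance per unit area is

$$R_{\mathrm{tot}} = \frac{2L_f}{D_f} + \frac{L_c}{D_c} .$$

If $D_f \geq 3D_c$ and $L_c > L_f$, each filter term is less than one third of the clay term, so the filters contribute only a modest share of $R_{\mathrm{tot}}$ and the choice of geometrical boundary conditions becomes correspondingly uncritical.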
Construction of predictive algorithms for concussion symptom recovery at 4 and 12 weeks post-injury, using an evidence-based assessment (EBA) model to guide clinical decision-making and extending the 2016 5P decision rule.
Participants and Methods:
Children and adolescents, ages 8-18 (n=1,551; mean age=12.78; 62% male), followed over 12 weeks in the prospective multicenter cohort study (Predicting Persistent Post-Concussive Problems in Pediatrics, 5P; Zemek et al., 2016). The age-specific Post-Concussion Symptom Inventory (PCSI) (ages 8-12, 17 items; ages 13-18, 20 items) was completed at six timepoints: in the ED and at 1, 2, 4, 8, and 12 weeks post-injury. Logistic regression analysis was applied to the set of key variables, including the PCSI Total Retrospective-Adjusted Post-Injury Difference (RAPID) scores, patient demographics and pre-injury history, and injury characteristics, to predict participant recovery status (Recovered, Not Recovered) at the 4- and 12-week endpoints. The resulting recovery-predictive equations identified the significant sets of variables with symptom scores at four successive post-injury timepoints (ED, 1, 2, 4 weeks). Logistic regression threshold values were established at the 90th CI, against which individual patient data were applied to determine recovery status. Participants with sub-threshold sums were deemed recovered at the target endpoint (4 or 12 weeks post-injury).
Results:
A total of 19 predictive equations were generated for the two age groups across the recovery timeline. Four sets of equations were developed to predict symptom recovery status at 4 weeks post-injury for the two age groups (8-12: AUC=0.679-0.884; 13-18: AUC=0.752-0.909). Prediction of symptom recovery status at 12 weeks post-injury yielded six equations for the 8-12 age group (AUC=0.723-0.825) and five equations for the 13-18 age group (AUC=0.724-0.887). Total PCSI RAPID score was identified as a significant variable in each of these 19 equations. Participant sex was identified as significant in 18 of the 19 constructed equations. Other variables identified as significant at varying timepoints included age, pre-injury history of learning disability and migraines, and an early post-injury sign in the ED (answering questions more slowly than usual). Examples of the equations, for Week 1 predicting symptom recovery status at 4 weeks, include: 8-12 yr group: (Sex × 0.802) + (Week 1 Total RAPID Score × 0.142) + (Age2 × 0.053) + (−3.851), AUC=0.808; 13-18 yr group: (Sex × 0.980) + (Week 1 Total RAPID Score × 0.071) + (−3.261), AUC=0.861.
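To illustrate how such an equation would be applied to an individual patient (with hypothetical inputs, since the abstract does not give the Sex coding or the numeric threshold): for the 13-18 group at Week 1, taking Sex = 1 and a Total RAPID score of 20 gives a linear predictor of (1 × 0.980) + (20 × 0.071) − 3.261 = −0.861, corresponding to a predicted probability of $1/(1 + e^{0.861}) \approx 0.30$; the resulting value would then be compared against the 90th-CI threshold to classify the patient's recovery status at 4 weeks.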
Conclusions:
Clinicians’ management of the concussion recovery of children and adolescents can benefit from EBA guidance. The 5P dataset (Zemek et al., 2016) provides an important window into “typical” and “atypical” recovery trajectories, establishing an initial predictive decision rule for a 4-week recovery endpoint at the ED timepoint only, reporting AUC=0.69. The current study extends the prediction modeling using successive post-injury timepoints reflecting a typical management timeline. Symptom reports from 1 and 2 weeks post-injury, together with patient demographics/history, predicted symptom recovery status at 4 and 12 weeks post-injury, significantly improving predictive accuracy over the ED timepoint alone. These predictive equations, when applied to the individual patient, can assist the clinician’s understanding of the patient’s recovery trajectory (i.e., on track for a typical or atypical recovery), further informing the intervention strategy.
Face-to-face administration is the “gold standard” for both research and clinical cognitive assessments. However, many factors may impede or prevent face-to-face assessments, including distance to clinic, limited mobility, eyesight, or transportation. The COVID-19 pandemic further widened gaps in access to care and clinical research participation. Alternatives to face-to-face assessments may provide an opportunity to alleviate the burden caused by both the COVID-19 pandemic and longer-standing social inequities. The objectives of this study were to develop and assess the feasibility of a telephone- and video-administered version of the Uniform Data Set (UDS) v3 cognitive batteries for use by NIH-funded Alzheimer’s Disease Research Centers (ADRCs) and other research programs.
Participants and Methods:
Ninety-three individuals (M age: 72.8 years; education: 15.6 years; 72% female; 84% White) enrolled in our ADRC were included. Their most recent adjudicated cognitive status was normal cognition (N=44), MCI (N=35), mild dementia (N=11) or other (N=3). They completed portions of the UDSv3 cognitive battery, plus the RAVLT, either by telephone or video format within approximately 6 months (M=151 days) of their annual in-person visit, where they completed the same in-person cognitive assessments. Some measures were substituted (Oral Trails for TMT; Blind MoCA for MoCA) to allow for phone administration. Participants also answered questions about the pleasantness, difficulty level, and preference for administration mode. Cognitive testers provided ratings of perceived validity of the assessment. Participants’ cognitive status was adjudicated by a group of cognitive experts blinded to the most recent in-person cognitive status.
Results:
When results from video and phone modalities were combined, the remote assessments were rated as pleasant as the in-person assessment by 74% of participants, and 75% rated the level of difficulty of completing the remote cognitive assessment the same as the in-person testing. Overall perceived validity of the testing session, determined by cognitive assessors (video = 92%; phone = 87.5%), was good. There was generally good concordance between test scores obtained remotely and in-person (r = .3-.8; p < .05), regardless of whether they were administered by phone or video, though individual test correlations differed slightly by mode. Substituted measures also generally correlated well, with the exception of TMT-A and OTMT-A (p > .05). Agreement between adjudicated cognitive status obtained remotely and cognitive status based on in-person data was generally high (78%), with slightly better concordance between video/in-person (82%) vs phone/in-person (76%).
Conclusions:
This pilot study provided support for the use of telephone- and video-administered cognitive assessments using the UDSv3 among individuals with normal cognitive function and some degree of cognitive impairment. Participants found the experience similarly pleasant and no more difficult than in-person assessment. Test scores obtained remotely correlated well with those obtained in person, with some variability across individual tests. Adjudication of cognitive status did not differ significantly whether it was based on data obtained remotely or in-person. The study was limited by its small sample size, large test-retest window, and lack of randomization to test-modality order. Current efforts are underway to more fully validate this battery of tests for remote assessment. Funded by: P30 AG072947 & P30 AG049638-05S1
External validation of symptom severity classification levels for the Post-Concussion Symptom Inventory (PCSI).
Participants and Methods:
Two distinct samples of parents and children, ages 8-18, participated: (1) a prospective multicenter cohort study (Predicting Persistent Post-concussive Problems in Pediatrics, 5P; Zemek et al., 2016), including parents (n=2,852), adolescents (n=1,087; mean age=15.13; 54% male), and children (n=1,271; mean age=10.70; 65% male); and (2) a published clinic sample at Children’s National Hospital (CN), including parents (n=1,197), adolescents (n=835), and children (n=326) (Gioia et al., 2019). Participants completed the age-specific Post-Concussion Symptom Inventory (PCSI) at a mean time post-injury of 8 hours (5P) and 6 days (CN), generating a post-pre-injury difference (RAPID) score. The distributions of the RAPID scores for the Total Symptom scale and 4 subscales (physical, emotional, cognitive, sleep/fatigue) were examined to define 4 symptom severity classification levels (minimal: within the CI for recovered; low: <20th %ile; moderate: 21st-79th %ile; high: >80th %ile) for the respective samples. These severity distributions were compared between the two distinct datasets.
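A minimal sketch of the percentile banding described above, on simulated scores; the "minimal" band, which depends on the recovered-group confidence interval, is omitted, and boundary handling is simplified.

```python
# Percentile-based severity cutoffs for a vector of RAPID scores.
import numpy as np

rng = np.random.default_rng(5)
rapid = rng.poisson(15, size=1000)               # simulated RAPID scores
low_cut, high_cut = np.percentile(rapid, [20, 80])
# 0 = low (<20th %ile), 1 = moderate, 2 = high (>80th %ile)
levels = np.digitize(rapid, bins=[low_cut, high_cut])
```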
Results:
ANOVAs were performed to examine group differences in the mean scores for each of the 4 classification levels. No significant differences were found for any of the RAPID score distributions, with minimal effect sizes (<.1% of variance) for the parents, adolescents and children. PCSI RAPID Total Score ranges for the severity classifications were as follows. Minimal: parents and adolescents, 5P <=5, Clinic <=5; children, 5P <=3, Clinic <=3. Low: parents, 5P 6-15, Clinic 6-13; adolescents, 5P 6-19, Clinic 6-16; children, 5P 4-7, Clinic 4-7. Moderate: parents, 5P 16-49, Clinic 14-47; adolescents, 5P 20-56, Clinic 17-51; children, 5P 8-17, Clinic 8-18. High: parents, 5P >=50, Clinic >=48; adolescents, 5P >=57, Clinic >=52; children, 5P >=18, Clinic >=19.
Conclusions:
Our findings reveal a parallel distribution of RAPID scores in the two distinct 5P and Clinic patient populations, yielding nearly identical severity classification level parameters across all five PCSI symptom domains (total score, physical, cognitive, emotional, and sleep/fatigue). The present investigation provides evidence of validity for the use of these severity classification levels across the ED and specialty clinic settings.