The interstage period is a critical phase for single ventricle infants due to their fragile cardiovascular state. Infants often experience medical and feeding challenges during this period, resulting in caregiver stress. We completed a quality improvement project at Children's Healthcare of Atlanta to understand these challenges and inform targeted interventions.
Methods:
This single-center project included a medical chart review and a cross-sectional caregiver survey. Data were collected on patient and caregiver demographics and clinical variables. Feeding outcomes were assessed using the Pediatric Functional Oral Intake Scale. Caregiver impact was measured using the Feeding/Swallowing Impact Survey.
Results:
The project included 15 single ventricle patients with a mean (standard deviation) age of 151.73 (25.92) days at the time of the second-stage palliation. Forty percent of patients experienced at least one readmission, primarily due to feeding intolerance (20%) and desaturations (26.7%). Milk protein allergy (26.9%) was the most common medical complication, followed by unplanned interstage reinterventions. Pediatric Functional Oral Intake Scale scores demonstrated that 33% of patients consumed minimal volumes or had no oral intake at the time of the bidirectional Glenn, and 93.3% did not receive outpatient feeding services during the interstage. Caregiver stress scores had a mean (standard deviation) of 2.23 (1.54), with the highest impact on daily activities. All caregivers affirmed the need for a dedicated multidisciplinary clinic.
Conclusion:
The interstage period for single ventricle patients poses significant medical and feeding challenges, resulting in caregiver stress. Comprehensive, multidisciplinary feeding support during the interstage period may improve patient outcomes and alleviate caregiver burden.
Interprofessional teams in the pediatric cardiac ICU consolidate their management plans in pre-family meeting huddles, a process that affects the course of family meetings but often lacks optimal communication and teamwork.
Methods:
Cardiac ICU clinicians participated in an interprofessional intervention to improve how they prepared for and conducted family meetings. We conducted a pretest–posttest study with clinicians participating in huddles before family meetings. We assessed the feasibility of clinician enrollment, clinician-perceived acceptability of the intervention via questionnaire and semi-structured interviews, and impact on team performance using a validated tool. A Wilcoxon rank sum test assessed the intervention's impact on team performance at the meeting level, comparing pre- and post-intervention data.
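As a rough illustration of the meeting-level comparison described above, the Python sketch below applies a Wilcoxon rank sum test to hypothetical team-performance ratings; the 1-5 scale and sample sizes (28 pre, 30 post) follow the Results, but the data themselves are invented for illustration.

# Hedged sketch: Wilcoxon rank sum test on hypothetical per-meeting scores.
from scipy.stats import ranksums

# Hypothetical ratings on one domain (e.g., Communication), scored 1-5:
# 28 pre-intervention and 30 post-intervention huddles.
pre_scores = [3, 2, 3, 4, 3, 2, 3, 3, 4, 3, 2, 3, 3, 3,
              4, 2, 3, 3, 2, 3, 4, 3, 3, 2, 3, 3, 4, 3]
post_scores = [5, 5, 4, 5, 5, 5, 4, 5, 5, 5, 5, 4, 5, 5, 5,
               5, 4, 5, 5, 5, 5, 5, 4, 5, 5, 5, 5, 4, 5, 5]

stat, p_value = ranksums(pre_scores, post_scores)
print(f"rank-sum statistic = {stat:.2f}, p = {p_value:.4f}")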
Results:
In total, 24 clinicians enrolled in the intervention (92% retention) with 100% completion of training. All participants would recommend Cardiac ICU Teams and Loved ones Communicating to others, and 96% believed it improved their participation in family meetings. We exceeded an acceptable level of protocol fidelity (>75%). Team performance was significantly higher (p < 0.001) in post-intervention huddles (n = 30) than in pre-intervention huddles (n = 28) in all domains. Median comparisons: Team Structure [2 vs. 5], Leadership [3 vs. 5], Situation Monitoring [3 vs. 5], Mutual Support [3 vs. 5], and Communication [3 vs. 5].
Conclusion:
Implementing an interprofessional team intervention to improve team performance in pre-family meeting huddles is feasible, acceptable, and improves team function. Future research should further assess impact on clinicians, patients, and families.
The psychometric rigor of unsupervised, smartphone-based assessments and the factors that impact remote protocol engagement are critical to evaluate prior to the use of such methods in clinical contexts. We evaluated the validity of a high-frequency, smartphone-based cognitive assessment protocol, including examining convergence and divergence with standard cognitive tests, and investigating factors that may impact adherence and performance (i.e., time of day and anticipated receipt of feedback vs. no feedback).
Methods:
Cognitively unimpaired participants (N = 120, mean age = 68.8 years, 68.3% female, 87% White, mean education = 16.5 years) completed 8 consecutive days of the Mobile Monitoring of Cognitive Change (M2C2), a mobile app-based testing platform, with brief morning, afternoon, and evening sessions. Tasks included measures of working memory, processing speed, and episodic memory. Traditional neuropsychological assessments included measures from the Preclinical Alzheimer's Cognitive Composite battery.
Results:
Findings showed overall high compliance (89.3%) across M2C2 sessions. Compliance varied by time of day: 90.2% for morning sessions, 77.9% for afternoon sessions, and 84.4% for evening sessions. There was evidence of faster reaction time among participants who expected to receive performance feedback. We observed excellent convergent and divergent validity in our comparison of M2C2 tasks and traditional neuropsychological assessments.
Conclusions:
This study supports the validity and reliability of self-administered, high-frequency cognitive assessment via smartphones in older adults. Insights into factors affecting adherence, performance, and protocol implementation are discussed.
Remote assessment for cognitive screening and monitoring in the elderly has many potential advantages, including improved convenience/access and ease of repeat testing. As remote testing becomes more feasible and common, it is important to examine what factors might influence performance and adherence with these new methods. Personal beliefs about one's ability to remember effectively have been shown to impact memory performance, especially in older adults (Lineweaver & Hertzog, 1998). The perception of a low level of personal control over memory may impact a person's use of memory strategies that might otherwise enhance performance, as well as their beliefs about the efficacy of those strategies (Lineweaver et al., 2021). The present study examined the relationship between perceived memory self-efficacy and performance and adherence on self-administered, smartphone-based remote cognitive assessments.
Participants and Methods:
Participants were 123 cognitively unimpaired adults (ages 55-80, 68.3% female, 87% White, mean = 16.5 years of education) recruited from the Butler Hospital Alzheimer's Prevention Registry as part of an ongoing study evaluating novel cognitive assessment methods. A cutoff score of ≥34 on the modified Telephone Interview for Cognitive Status (TICSm) was required for enrollment. Perceived memory self-efficacy was assessed using two subscales of the Personal Beliefs about Memory Instrument (PBMI; Lineweaver et al., 1998): "prospective control," the perception of control one currently has to influence future memory functioning, and "future control," the perception of the amount of control over memory function one will have in the future. Participants completed three brief self-administered cognitive testing sessions per day for 8 consecutive days using a mobile app-based platform developed as part of the National Institute on Aging's Mobile Toolbox initiative. Cognitive tasks assessed visual working memory (WM), processing speed (PS), and episodic memory (EM) (see Thompson et al., 2022).
Results:
Statistical analyses were conducted using univariate ANOVA tests to look for main effects of each PBMI subscale score on remote assessment adherence and on average performance on each task over the 8 days. After adjusting for age, we found that a higher rate of false alarms (proportion of misidentified stimuli) on the WM task was associated with higher levels of both self-reported prospective control (F(2, 86) = 4.188, p = .018) and future control (F(2, 96) = 5.003, p = .009). Increased response time on the PS task was also associated with higher levels of future control when adjusted for age (F(2, 96) = 6.075, p = .003). There was no main effect of memory self-efficacy ratings on EM. We found no main effects of memory self-efficacy ratings on assessment adherence.
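For readers wanting to reproduce this style of analysis, below is a minimal Python sketch of a univariate ANOVA with an age covariate, run on simulated data; the variable names (pbmi_future, ps_response_time) and the three-level coding of the subscale (consistent with the reported 2 numerator degrees of freedom) are illustrative assumptions, not the study's actual variables.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
n = 99
df = pd.DataFrame({
    "age": rng.uniform(55, 80, n),
    "pbmi_future": rng.choice(["low", "medium", "high"], n),
    "ps_response_time": rng.normal(900, 120, n),  # ms, simulated
})

# Univariate ANOVA: main effect of the PBMI subscale on PS response
# time, adjusting for age (Type II sums of squares).
model = smf.ols("ps_response_time ~ age + C(pbmi_future)", data=df).fit()
print(anova_lm(model, typ=2))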
Conclusions:
These findings suggest perceptions of high prospective and future control are associated with positive response bias on a forced-choice WM task, and high perceptions of future control are also associated with slower response times on PS tasks. Future research should examine whether this is due to increased deliberation, cautiousness, or other factors. Limitations include the potentially limited generalizability of this largely White, highly educated, and motivated sample self-selected for AD research. Next steps for this research include comparing these results with the effects of perceived self-efficacy on in-person cognitive assessments.
Intraindividual variability (IIV) is defined as fluctuations in an individual's cognitive performance over time [1]. IIV has been identified as a marker of neurobiological disturbance, making it a useful method for detecting changes in cognition among cognitively healthy individuals as well as those with prodromal syndromes [2]. IIV on laboratory-based computerized tasks has been linked with cognitive decline and conversion to mild cognitive impairment (MCI) and/or dementia (Haynes et al., 2017). Associations between IIV and AD risk factors, including apolipoprotein E (APOE) ε4 carrier status, neurodegeneration seen on brain imaging, and amyloid-β (Aβ) positron emission tomography (PET) scan status, have also been observed [1]. Recent studies have demonstrated that evaluating IIV on smartphone-based digital cognitive assessments is feasible, has the capacity to differentiate between cognitively normal (CN) and MCI individuals, and may reduce barriers to cognitive assessment [3]. This study sought to evaluate whether such differences could be detected in CN participants with and without elevated AD risk.
Participants and Methods:
Participants (n=57) were cognitively normal older adults who previously received an Aβ PET scan through the Butler Hospital Memory and Aging Program. The sample was primarily non-Hispanic (n=49, 86.0%), White (n=52, 91.2%), college-educated (mean=16.65 years of education), and female (n=39, 68.4%). The average age of the sample was 68 years. Approximately 42% of the sample (n=24) received a positive PET scan result. Participants completed brief cognitive assessments (i.e., 3-4 minutes) three times per day for eight days (i.e., 24 sessions) using the Mobile Monitoring of Cognitive Change (M2C2) application, a mobile app-based cognitive testing platform developed as part of the National Institute on Aging's Mobile Toolbox initiative (Sliwinski et al., 2018). Participants completed visual working memory, processing speed, and episodic memory tasks on the M2C2 platform. Intraindividual standard deviations (ISDs) across trials were computed for each person at each time point (Hultsch et al., 2000). Higher ISD values indicate more variability in performance. Linear mixed effects models were used to examine whether differences in IIV existed based on PET scan status while controlling for age, sex at birth, and years of education.
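The two analysis steps just described, computing ISDs across trials and testing a PET-status-by-time interaction in a linear mixed effects model, can be sketched in Python as follows; all data, trial counts, and column names are simulated and illustrative, and the covariates (age, sex, education) are omitted for brevity.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for pid in range(57):
    pet_pos = int(pid < 24)  # ~42% PET-positive, as in the sample
    for day in range(8):
        for trial in range(20):  # assumed trial count per day
            rt = rng.normal(600.0, 50.0 + 4.0 * pet_pos * day)
            rows.append({"pid": pid, "pet_pos": pet_pos, "day": day, "rt": rt})
trials = pd.DataFrame(rows)

# Step 1: intraindividual standard deviation (ISD) of reaction time
# across trials, per person per day; higher ISD = more variability.
isd = (trials.groupby(["pid", "pet_pos", "day"])["rt"]
             .std()
             .reset_index(name="isd"))

# Step 2: mixed model with random intercepts per participant; the
# pet_pos:day term tests whether variability diverges over the 8 days.
fit = smf.mixedlm("isd ~ pet_pos * day", data=isd, groups=isd["pid"]).fit()
print(fit.summary())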
Results:
An interaction between PET status and time was observed on the processing speed task, such that Aβ− individuals were less variable over the eight assessment days compared to Aβ+ individuals (B = -5.79, SE = 2.67, p = .04). No main or interaction effects were observed on the visual working memory or episodic memory tasks.
Conclusions:
Our finding that Aβ− individuals demonstrate less variability over time on a measure of processing speed is consistent with prior work. No associations were found between IIV in other cognitive domains and PET status. As noted by Allaire and Marsiske (2005), IIV is not a consistent phenomenon across different cognitive domains. Therefore, identifying which tests are the most sensitive to early change is crucial. Additional studies in larger, more diverse samples are needed prior to widespread clinical use for early detection of AD.
Routine cognitive screening in the elderly may facilitate earlier diagnosis of neurodegenerative diseases and access to care and resources for patients and families. However, despite growing rates of Alzheimer's disease and related disorders (ADRD), the availability and implementation of cognitive screening for older adults in the US remain quite limited. Remote cognitive assessment via smartphone app may reduce several barriers to more widespread screening. We examined the validity of a remote app-based cognitive screening protocol in healthy older adults by examining remote task convergence with standard in-person assessments and with cerebral amyloid-β (Aβ) status as an AD biomarker.
Participants and Methods:
Participants (N = 117) were cognitively unimpaired adults aged 60-80 years (67.5% female, 88% White, 75% with education > 16 years). A portion had Aβ PET imaging results available from prior research participation (Aβ positive (Aβ+), n = 26; Aβ negative (Aβ−), n = 44). A modified Telephone Interview for Cognitive Status (TICSm) cutoff score of >34 was used to establish unimpaired cognition. Participants completed 8 consecutive assessment days using the Mobile Monitoring of Cognitive Change (M2C2), a smartphone app-based testing platform developed as part of the National Institute on Aging's Mobile Toolbox initiative. Brief (i.e., 3-4 minute) M2C2 sessions were assigned daily within morning, afternoon, and evening time windows. Tasks included measures of visual working memory (WM), processing speed (PS), and episodic memory (EM) (see Thompson et al., 2022). Participants then completed a battery of standard neuropsychological assessments in person at a follow-up visit.
Results:
Participants completed 22.6 (SD = 2.6) of 24 assigned sessions (3 sessions × 8 days) on average. Performance on all M2C2 tasks decreased significantly with age. Women performed significantly better on the WM and EM tasks relative to men. There were no detectable significant differences in performance by race or education. Shorter mean reaction time on M2C2 PS trials predicted faster Trails A and B completion (β = .26, p < .01, 95% CI [3.8, 23.3] and β = .20, p < .05, 95% CI [.23, 6.8], respectively). Greater mean M2C2 WM accuracy predicted longer maximum backward digit span (β = .24, p = .01, 95% CI [.02, .16]). Greater mean M2C2 EM accuracy predicted stronger Logical Memory delayed recall (β = .33, p < .001, 95% CI [.004, .012]) and total immediate recall on the Free and Cued Selective Reminding Test (β = .19, p < .05, 95% CI [.000, .006]). Moreover, EM significantly distinguished Aβ− and Aβ+ individuals (t(68) = 3.0, p < .01) with fair accuracy (AUC = .72).
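The Aβ group comparison and discrimination analysis reported above can be sketched as follows, with simulated episodic-memory accuracy scores for the 44 Aβ− and 26 Aβ+ participants; the data are invented for illustration and will not reproduce the reported statistics exactly.

import numpy as np
from scipy.stats import ttest_ind
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
em_neg = rng.normal(0.85, 0.07, 44)  # Aβ-negative EM accuracy (simulated)
em_pos = rng.normal(0.79, 0.08, 26)  # Aβ-positive EM accuracy (simulated)

t_stat, p_value = ttest_ind(em_neg, em_pos)
print(f"t({len(em_neg) + len(em_pos) - 2}) = {t_stat:.2f}, p = {p_value:.4f}")

# AUC for separating Aβ+ from Aβ-; accuracy is negated so that lower
# EM accuracy ranks as more likely Aβ+.
labels = np.concatenate([np.zeros(44), np.ones(26)])  # 1 = Aβ+
scores = np.concatenate([em_neg, em_pos])
print(f"AUC = {roc_auc_score(labels, -scores):.2f}")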
Conclusions:
Mean performance across 8 days on each M2C2 task predicted same-domain cognitive task performance on a standard assessment battery, with medium effect sizes. Performance on the EM task was also sensitive to cerebral Aβ status, consistent with the subtle memory changes implicated in the preclinical stage of AD. These findings support the validity of this remote testing protocol in healthy older adults, with implications for future efforts to facilitate accessible and sensitive cognitive screening for early detection of ADRD. Limitations include the restricted generalizability of this primarily White and college-educated sample.
Every year in Australia, over a thousand children who are born with congenital heart disease require surgical intervention. Vocal cord dysfunction (VCD) can be an unavoidable and potentially devastating complication of surgery for congenital heart disease. Structured, multidisciplinary care pathways help to guide clinical care and reduce mortality and morbidity. An implementation study was conducted to embed a novel, multidisciplinary management pathway into practice using the Consolidated Framework for Implementation Research (CFIR). The goal of the pathway was to prepare children with postoperative vocal cord dysfunction to safely commence and transition to oral feeding. Education sessions to support pathway rollout were completed with clinical stakeholders. Other implementation strategies included adaptation of the pre-procedural pathway to obtain consent, improving the process of identifying patients on the VCD pathway, and nominating a small team responsible for the ongoing monitoring of patients following recruitment. Implementation success was evaluated according to compliance with pathway-defined management. Our study found that, while there were several barriers to pathway adoption, implementation of the pathway was feasible despite the adaptations required in response to COVID-19.
Autism spectrum disorder (ASD) is a common neurodevelopmental disorder characterized by deficits in social communication and the presence of restricted, repetitive behaviors and interests. However, individuals with ASD vary significantly in their challenges and abilities in these and other developmental domains. Gene discovery in ASD has accelerated in the past decade, and genetic subtyping has yielded preliminary evidence of utility in parsing phenotypic heterogeneity through genomic subtypes. Recent advances in transcriptomics have provided additional dimensions with which to refine genetic subtyping efforts. In the current study, we investigate phenotypic differences among transcriptional subtypes defined by neurobiological spatiotemporal co-expression patterns. Of the four transcriptional subtypes examined, participants with mutations to genes typically expressed highly in all brain regions prenatally, and those with differential postnatal cerebellar expression relative to other brain regions, showed lower cognitive and adaptive skills, higher severity of social communication deficits, and later acquisition of speech and motor milestones than those with mutations to genes highly expressed across brain regions postnatally. These findings suggest that higher-order characterization of genetic subtypes based on neurobiological expression patterns may be a promising approach to parsing phenotypic heterogeneity among those with ASD and related neurodevelopmental disorders.
We present the case of a 17-year-old boy with a cardiac venous malformation. This case highlights the diagnostic challenges of such tumours and demonstrates the potential efficacy of a watch-and-wait management approach.
To evaluate whole-genome sequencing (WGS) as a molecular typing tool for MRSA outbreak investigation.
Design
Investigation of MRSA colonization/infection in a neonatal intensive care unit (NICU) over 3 years (2014–2017).
Setting
Single-center level IV NICU.
Patients
NICU infants and healthcare workers (HCWs).
Methods
Infants were screened for MRSA using a swab of the anterior nares, axilla, and groin, initially by targeted (ring) screening, and later by universal weekly screening. Clinical cultures were collected as indicated. HCWs were screened once using swabs of the anterior nares. MRSA isolates were typed using WGS with core-genome multilocus sequence typing (cgMLST) analysis and by pulsed-field gel electrophoresis (PFGE). Colonized and infected infants and HCWs were decolonized. Control strategies included reinforcement of hand hygiene, use of contact precautions, cohorting, enhanced environmental cleaning, and remodeling of the NICU.
Results
We identified 64 MRSA-positive infants: 53 (83%) by screening and 11 (17%) by clinical cultures. Of 85 screened HCWs, 5 (6%) were MRSA positive. WGS of MRSA isolates identified 2 large clusters (WGS groups 1 and 2), 1 small cluster (WGS group 3), and 8 unrelated isolates. PFGE failed to distinguish WGS group 2 and 3 isolates. WGS groups 1 and 2 were codistributed over time. HCW MRSA isolates were primarily in WGS group 1. New infant MRSA cases declined after implementation of the control interventions.
Conclusion
We identified 2 contemporaneous MRSA outbreaks alongside sporadic cases in a NICU. WGS was used to determine strain relatedness at a higher resolution than PFGE and was useful in guiding efforts to control MRSA transmission.
The concept of nobility in the Middle Ages is the focus of this volume. Embracing regions as diverse as England (before and after the Norman Conquest), Italy, the Iberian peninsula, France, Norway, Poland, Portugal, and the Romano-German empire, it ranges over the whole medieval period from the fifth to the early sixteenth century. The articles confront many of the central issues about the origins and nature of 'nobility', its relationship with the late Roman world, its acquisition and exercise of power, its association with military obligation, and its gradual 'pacification' and transformation into a more or less willing instrument of royal government (indeed, the symbiotic relationship between royal, or imperial, and noble power is a recurring theme). Other ideas historically linked to the concept of nobility and discussed here are 'nobility' itself; the distinction between nobility of birth and nobility of character; chivalry; violence and its effects; and noblewomen as co-progenitors and transmitters of nobility of blood.
Dr ANNE DUGGAN teaches in the Department of History at King's College London.
Whether monozygotic (MZ) and dizygotic (DZ) twins differ from each other in a variety of phenotypes is important for genetic twin modeling and for inferences made from twin studies in general. We analyzed whether there were differences in individual, maternal and paternal education between MZ and DZ twins in a large pooled dataset. Information was gathered on individual education for 218,362 adult twins from 27 twin cohorts (53% females; 39% MZ twins), and on maternal and paternal education for 147,315 and 143,056 twins respectively, from 28 twin cohorts (52% females; 38% MZ twins). Together, we had information on individual or parental education from 42 twin cohorts representing 19 countries. The original education classifications were transformed to education years and analyzed using linear regression models. Overall, MZ males had 0.26 (95% CI [0.21, 0.31]) years and MZ females 0.17 (95% CI [0.12, 0.21]) years longer education than DZ twins. The zygosity difference became smaller in more recent birth cohorts for both males and females. Parental education was somewhat longer for fathers of DZ twins in cohorts born in 1990–1999 (0.16 years, 95% CI [0.08, 0.25]) and 2000 or later (0.11 years, 95% CI [0.00, 0.22]), compared with fathers of MZ twins. The results show that the years of both individual and parental education are largely similar in MZ and DZ twins. We suggest that the socio-economic differences between MZ and DZ twins are so small that inferences based upon genetic modeling of twin data are not affected.
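A minimal Python sketch of the core analysis, linear regression of education years on zygosity within birth cohorts, is shown below; the data, effect sizes, cohort groupings, and column names are simulated assumptions, since the pooled cohort data are not reproduced here.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 5000
df = pd.DataFrame({
    "mz": rng.choice([0, 1], n, p=[0.61, 0.39]),  # 1 = monozygotic
    "cohort": rng.choice(["1940-59", "1960-79", "1980-99"], n),
})
df["edu_years"] = 12.0 + 0.25 * df["mz"] + rng.normal(0.0, 3.0, n)

# MZ-DZ difference in education years, estimated within each birth cohort.
for cohort, grp in df.groupby("cohort"):
    fit = smf.ols("edu_years ~ mz", data=grp).fit()
    lo, hi = fit.conf_int().loc["mz"]
    print(f"{cohort}: MZ-DZ = {fit.params['mz']:.2f} years "
          f"(95% CI [{lo:.2f}, {hi:.2f}])")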
Patient-reported outcomes and epidemiological studies in adults with tetralogy of Fallot are lacking. Recruitment and longitudinal follow-up across institutions are particularly challenging. The objectives of this study were to assess the feasibility of recruiting adult patients with tetralogy of Fallot for a patient-reported outcomes study, to describe challenges for recruitment, and to create an interactive, online tetralogy of Fallot registry.
Methods
Adult patients living with tetralogy of Fallot, aged 18–58 years, at the University of North Carolina were identified using a diagnosis code query. A survey was designed to collect demographics, symptoms, history, and birth mother information. Recruitment was attempted by phone (Part I, n=20) or by email (Part II, n=20). Data analysis included thematic grouping of recruitment challenges and descriptive statistics. The feasibility threshold was 75% for recruitment and for data fields completed per patient.
Results
In Part I, 60% (12/20) were successfully contacted and eight (40%) were enrolled. Demographics and birth mother information were obtained for all enrolled patients. In Part II, 70% (14/20) were successfully contacted; 30% (6/20) enrolled and completed all data fields linked to a REDCap database; the median time for survey completion was 8 minutes. Half of the patients had cardiac operations/procedures performed at more than one hospital. Automatic electronic data entry from the online survey was uncomplicated.
Conclusions
Although recruitment (54%) fell below our feasibility threshold, enrolled individuals were willing to complete phone or online surveys. Incorrect contact information, privacy concerns, and patient-reported time constraints were challenges for recruitment. Creating an online survey and linked database is technically feasible and efficient for patient-reported outcomes research.
Despite a reported worldwide increase, the incidence of extended-spectrum β-lactamase (ESBL)–producing Escherichia coli and Klebsiella infections in the United States is unknown. Understanding the incidence and trends of ESBL infections will aid in directing research and prevention efforts.
OBJECTIVE
To perform a literature review to identify the incidence of ESBL-producing E. coli and Klebsiella infections in the United States.
DESIGN
Systematic literature review.
METHODS
MEDLINE via Ovid, CINAHL, the Cochrane Library, the NHS Economic Evaluation Database, Web of Science, and Scopus were searched for multicenter (≥2 sites), US studies published between 2000 and 2015 that evaluated the incidence of ESBL-E. coli or ESBL-Klebsiella infections. We excluded studies that examined resistance rates alone or that lacked a denominator including uninfected patients (e.g., patient days, device days, number of admissions, or number of discharges). Additionally, articles that were not written in English, contained duplicated data, or pertained to ESBL organisms from food, animals, or the environment were excluded.
RESULTS
Among 51,419 studies examined, 9 were included for review. Incidence rates differed by patient population, time, and ESBL definition, ranging from 0 infections per 100,000 patient days to 16.64 infections per 10,000 discharges; incidence rates increased over time from 1997 to 2011. Rates were slightly higher for ESBL-Klebsiella infections than for ESBL-E. coli infections.
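A small worked example of why these rates resist direct comparison: incidence can be normalized per patient days or per discharges, and the same case count yields different-looking figures under the two denominators. The numbers below are hypothetical.

# Hypothetical counts illustrating two incidence denominators.
cases = 25
patient_days = 180_000
discharges = 15_000

per_100k_patient_days = cases / patient_days * 100_000
per_10k_discharges = cases / discharges * 10_000
print(f"{per_100k_patient_days:.2f} infections per 100,000 patient days")
print(f"{per_10k_discharges:.2f} infections per 10,000 discharges")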
CONCLUSION
The incidence of ESBL-E. coli and ESBL-Klebsiella infections in the United States has increased, with slightly higher rates of ESBL-Klebsiella infections. Appropriate estimates of ESBL infections when coupled with other mechanisms of resistance will allow for the appropriate targeting of resources toward research, drug discovery, antimicrobial stewardship, and infection prevention.
To describe the characteristics and impact of Clostridium difficile infection (CDI) in a long-term acute-care hospital (LTACH).
DESIGN
Retrospective matched cohort study.
SETTING
A 38-bed, urban, university-affiliated LTACH.
METHODS
The characteristics of LTACH-onset CDI were assessed among patients hospitalized between July 2008 and October 2015. Patients with CDI were matched to concurrently hospitalized patients without a diagnosis of CDI. Severe CDI was defined as CDI with 2 or more of the following criteria: age ≥65 years, serum creatinine ≥2 mg/dL, or peripheral leukocyte count ≥20,000 cells/μL. A conditional Poisson regression model was developed to determine characteristics associated with a composite primary outcome of 30-day readmission to an acute-care hospital or mortality.
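As a sketch of how such a conditional Poisson regression might be set up in Python: one common approximation conditions on matched sets by giving each set its own fixed effect in a Poisson model, after dropping sets in which no one had the outcome (those sets are uninformative under conditioning). The data, set count, and variable names are simulated and illustrative, not the study's.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
rows = []
for s in range(60):  # one CDI case matched to one control per set
    for cdi in (1, 0):
        outcome = rng.binomial(1, 0.30 + 0.02 * cdi)  # readmission/death
        rows.append({"match_set": s, "cdi": cdi, "outcome": outcome})
df = pd.DataFrame(rows)

# Drop uninformative sets (no outcome in either member), then fit a
# Poisson model with matched-set fixed effects; the exponentiated cdi
# coefficient approximates the relative risk of the composite outcome.
informative = df.groupby("match_set")["outcome"].transform("sum") > 0
fit = smf.poisson("outcome ~ cdi + C(match_set)",
                  data=df[informative]).fit(disp=False)
print(f"RR for CDI = {np.exp(fit.params['cdi']):.2f}")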
RESULTS
The overall incidence of CDI was 21.4 cases per 10,000 patient days, with 27% of infections classified as severe. Patients with CDI had a mean age of 70 years (SD, 14 years), a mean Charlson comorbidity index of 3.6 (SD, 2.0), a median length of stay of 33 days (interquartile range [IQR], 24–45 days), and a median time between admission and CDI diagnosis of 16 days (IQR, 9–23 days). The most commonly prescribed antibiotic preceding a CDI diagnosis was a cephalosporin, with a median duration of 8 days (IQR, 4–14 days). In multivariate analysis, CDI was not significantly associated with the primary outcome (relative risk, 0.97; 95% CI, 0.59–1.58).
CONCLUSIONS
Incidence of CDI in an urban, university-affiliated LTACH was high. Future research should focus on infection prevention measures to decrease the burden of CDI in this complex patient population.