It remains unclear which individuals with subthreshold depression benefit most from psychological intervention, and what long-term effects this has on symptom deterioration, response and remission.
Aims
To synthesise psychological intervention benefits in adults with subthreshold depression up to 2 years, and explore participant-level effect-modifiers.
Method
Randomised trials comparing psychological intervention with inactive control were identified via systematic search. Authors were contacted to obtain individual participant data (IPD), analysed using Bayesian one-stage meta-analysis. Treatment–covariate interactions were added to examine moderators. Hierarchical-additive models were used to explore treatment benefits conditional on baseline Patient Health Questionnaire 9 (PHQ-9) values.
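The standardised mean differences reported below can be read as Cohen's d computed with a pooled standard deviation. A minimal sketch of that calculation, using invented PHQ-9 group summaries rather than values from the meta-analysis:

```python
import math

def smd(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardised mean difference (Cohen's d) between a treatment and a
    control group, using the pooled standard deviation. All inputs in the
    example below are illustrative, not study data."""
    pooled_sd = math.sqrt(((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2)
                          / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

# Hypothetical PHQ-9 endpoint summaries: intervention mean 6.0 (s.d. 4.0),
# control mean 8.0 (s.d. 4.0), 100 participants per arm.
print(round(smd(6.0, 4.0, 100, 8.0, 4.0, 100), 2))  # → -0.5; negative favours intervention
```

A negative s.m.d. indicates lower symptom severity in the intervention arm, matching the sign convention used in the Results.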
Results
IPD of 10 671 individuals (50 studies) could be included. We found significant effects on depressive symptom severity up to 12 months (standardised mean-difference [s.m.d.] = −0.48 to −0.27). Effects could not be ascertained up to 24 months (s.m.d. = −0.18). Similar findings emerged for 50% symptom reduction (relative risk = 1.27–2.79), reliable improvement (relative risk = 1.38–3.17), deterioration (relative risk = 0.67–0.54) and close-to-symptom-free status (relative risk = 1.41–2.80). Among participant-level moderators, only initial depression and anxiety severity were highly credible (P > 0.99). Predicted treatment benefits decreased with lower symptom severity but remained minimally important even for very mild symptoms (s.m.d. = −0.33 for PHQ-9 = 5).
Conclusions
Psychological intervention reduces the symptom burden in individuals with subthreshold depression up to 1 year, and protects against symptom deterioration. Benefits up to 2 years are less certain. We find strong support for intervention in subthreshold depression, particularly with PHQ-9 scores ≥ 10. For very mild symptoms, scalable treatments could be an attractive option.
Fear learning is a core component of conceptual models of how adverse experiences may influence psychopathology. Specifically, existing theories posit that childhood experiences involving trauma are associated with altered fear learning processes, while experiences involving deprivation are not. Several studies have found altered fear acquisition in youth exposed to trauma, but not deprivation, although the specific patterns have varied across studies. The present study utilizes a longitudinal sample of children with variability in adversity experiences to examine associations among childhood trauma, fear learning, and psychopathology in youth.
Methods
The sample includes 170 youths aged 10–13 years (M = 11.56, s.d. = 0.47, 48.24% female). Children completed a fear conditioning task, including both acquisition and extinction phases, while skin conductance responses (SCR) were obtained. Childhood trauma and deprivation severity were measured using both parent and youth report. Symptoms of anxiety, externalizing problems, and post-traumatic stress disorder (PTSD) were assessed at baseline and again two years later.
Results
Greater trauma-related experiences were associated with greater SCR to the threat cue (CS+) relative to the safety cue (CS−) in early fear acquisition, controlling for deprivation, age, and sex. Deprivation was unrelated to fear learning. Greater SCR to the threat cue during early acquisition was associated with increased PTSD symptoms over time controlling for baseline symptoms and mediated the relationship between trauma and prospective changes in PTSD symptoms.
Conclusions
Childhood trauma is associated with altered fear learning in youth, which may be one mechanism linking exposure to violence with the emergence of PTSD symptoms in adolescence.
Structured processes to improve the quality and impact of clinical and translational research are a required element of the Clinical and Translational Sciences Awards (CTSA) program and are central to awardees’ strategic management efforts. Quality improvement is often assumed to be an ordinary consequence of evaluation programs, in which standardized metrics are tabulated and reported externally. Yet evaluation programs may not actually be very effective at driving quality improvement: required metrics may lack direct relevance; there is little incentive to improve in areas of relative strength; and the validity of inter-site comparisons may be limited. In this article, we describe how we convened leaders at our CTSA hub in an iterative planning process to improve the quality of our CTSA program by intentionally focusing on how data collection activities can primarily advance continuous quality improvement (CQI) rather than strictly serve as evaluative tools. We describe our CQI process, which consists of three key components: (1) logic models outlining goals and associated mechanisms; (2) relevant metrics to evaluate performance improvement opportunities; and (3) an interconnected and collaborative CQI framework that defines actions and timelines to enhance performance.
The locus coeruleus (LC) innervates the cerebrovasculature and plays a crucial role in optimal regulation of cerebral blood flow. However, no human studies to date have examined links between these systems with widely available neuroimaging methods. We quantified associations between LC structural integrity and regional cortical perfusion and probed whether varying levels of plasma Alzheimer’s disease (AD) biomarkers (Aβ42/40 ratio and ptau181) moderated these relationships.
Participants and Methods:
64 dementia-free community-dwelling older adults (ages 55-87) recruited across two studies underwent structural and functional neuroimaging on the same MRI scanner. 3D-pCASL MRI measured regional cerebral blood flow in limbic and frontal cortical regions, while T1-FSE MRI quantified rostral LC-MRI contrast, a well-established proxy measure of LC structural integrity. A subset of participants underwent fasting blood draw to measure plasma AD biomarker concentrations (Aβ42/40 ratio and ptau181). Multiple linear regression models examined associations between perfusion and LC integrity, with rostral LC-MRI contrast as predictor, regional CBF as outcome, and age and study as covariates. Moderation analyses included additional terms for plasma AD biomarker concentration and the plasma × LC interaction.
Results:
Greater rostral LC-MRI contrast was linked to lower regional perfusion in limbic regions, such as the amygdala (β = -0.25, p = 0.049) and entorhinal cortex (β = -0.20, p = 0.042), but was linked to higher regional perfusion in frontal cortical regions, such as the lateral (β = 0.28, p = 0.003) and medial (β = 0.24, p = 0.05) orbitofrontal cortices (OFC). Plasma amyloid levels moderated the relationship between rostral LC and amygdala CBF (Aβ42/40 ratio × rostral LC interaction term β = -0.31, p = 0.021), such that as plasma Aβ42/40 ratio decreased (i.e., greater pathology), the strength of the negative relationship between rostral LC integrity and amygdala perfusion decreased. Plasma ptau181 levels moderated the relationship between rostral LC and entorhinal CBF (ptau181 × rostral LC interaction term β = 0.64, p = 0.001), such that as ptau181 increased (i.e., greater pathology), the strength of the negative relationship between rostral LC integrity and entorhinal perfusion decreased. For frontal cortical regions, ptau181 levels moderated the relationship between rostral LC and lateral OFC perfusion (ptau181 × rostral LC interaction term β = -0.54, p = 0.004), as well as between rostral LC and medial OFC perfusion (ptau181 × rostral LC interaction term β = -0.53, p = 0.005), such that as ptau181 increased (i.e., greater pathology), the strength of the positive relationship between rostral LC integrity and frontal perfusion decreased.
Conclusions:
LC integrity is linked to regional cortical perfusion in non-demented older adults, and these relationships are moderated by plasma AD biomarker concentrations. Variable directionality of the associations between the LC and frontal versus limbic perfusion, as well as the differential moderating effects of plasma AD biomarkers, may signify a compensatory mechanism and a shifting pattern of hyperemia in the presence of aggregating AD pathology. Linking LC integrity and cerebrovascular regulation may represent an important understudied pathway of dementia risk and may help to bridge competing theories of dementia progression in preclinical AD studies.
Individuals living with HIV may experience cognitive difficulties or marked declines known as HIV-Associated Neurocognitive Disorder (HAND). Cognitive difficulties have been associated with worse outcomes for people living with HIV; therefore, accurate cognitive screening and identification is critical. One potentially sensitive but underutilized marker of cognitive impairment is intra-individual variability (IIV). Cognitive IIV is the dispersion of scores across tasks in neuropsychological assessment. In individuals living with HIV, greater cognitive IIV has been associated with cortical atrophy, poorer cognitive functioning with more rapid decline, and greater difficulties in daily functioning. Studies examining the use of IIV in clinical neuropsychological testing are limited, and few have examined IIV in the context of a single neuropsychological battery designed for culturally diverse or at-risk populations. To address these gaps, this study aimed to examine IIV profiles of individuals living with HIV and who inject drugs, utilizing the Neuropsi, a standardized neuropsychological instrument for Spanish-speaking populations.
Participants and Methods:
Spanish-speaking adults residing in Puerto Rico (n=90) who are HIV positive and who inject drugs (HIV+I), HIV negative and who inject drugs (HIV-I), HIV positive and do not inject drugs (HIV+), or healthy controls (HC) completed the Neuropsi battery as part of a larger research protocol. The Neuropsi produces 3 index scores representing the cognitive domains of memory, attention/memory, and attention/executive functioning. Total battery and within-index IIV were calculated by dividing the standard deviation of T-scores by mean performance, resulting in a coefficient of variance (CoV). Group differences in overall test battery mean CoV (OTBMCoV) were investigated. To examine unique profiles of index-specific IIV, a cluster analysis was performed for each group.
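The IIV metric described above reduces to a simple ratio over an individual's subtest scores. A minimal sketch, assuming per-subtest T-scores are available (the sample profiles below are invented, not Neuropsi data):

```python
import statistics

def coefficient_of_variance(t_scores):
    """Intra-individual variability as a coefficient of variance (CoV):
    the standard deviation of a person's subtest T-scores divided by
    their mean performance, mirroring the metric in the Methods."""
    return statistics.stdev(t_scores) / statistics.mean(t_scores)

# Hypothetical profiles: a flat profile yields low IIV, a scattered one high IIV.
flat = [48, 50, 52, 49, 51]
scattered = [30, 65, 45, 70, 40]
print(round(coefficient_of_variance(flat), 3))       # → 0.032
print(round(coefficient_of_variance(scattered), 3))  # → 0.339
```

Averaging each participant's CoV over the full battery gives the OTBMCoV compared between groups in the Results.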
Results:
Results of a one-way ANOVA indicated significant between group differences on OTBMCoV (F[3,86]=6.54, p<.001). Post-hoc analyses revealed that HIV+I (M=.55, SE=.07, p=.003), HIV-I (M=.50, SE=.03, p=.001), and HIV+ (M=.48, SE=.02, p=.002) had greater OTBMCoV than the HC group (M=.30, SE=.02). To better understand sources of IIV within each group, cluster analysis of index specific IIV was conducted. For the HIV+ group, 3 distinct clusters were extracted: 1. High IIV in attention/memory and attention/executive functioning (n=3, 8%); 2. Elevated memory IIV (n=21, 52%); 3. Low IIV across all indices (n=16, 40%). For the HIV-I group, 2 distinct clusters were extracted: 1. High IIV across all 3 indices (n=7, 24%) and 2. Low IIV across all 3 indices (n=22, 76%). For the HC group, 3 distinct clusters were extracted: 1. Very low IIV across all 3 indices (n=5, 36%); 2. Elevated memory IIV (n=6, 43%); 3. Elevated attention/executive functioning IIV with very low attention/memory and memory IIV (n=3, 21%). Sample size of the HIV+I group was insufficient to extract clusters.
Conclusions:
Current findings support IIV in the Neuropsi test battery as a clinically sensitive marker of cognitive impairment in Spanish-speaking individuals living with HIV or who inject drugs. Furthermore, the distinct IIV cluster types identified between groups can help to better understand specific sources of variability. Implications for clinical assessment, prognosis, and etiological considerations are discussed.
Executive functions (EFs) are considered to be both unitary and diverse functions with common conceptualizations consisting of inhibitory control, working memory, and cognitive flexibility. Current research indicates that these abilities develop along different timelines and that working memory and inhibitory control may be foundational for cognitive flexibility, or the ability to shift attention between tasks or operations. Very few interventions target cognitive flexibility despite its importance for academic or occupational tasks, social skills, problem-solving, and goal-directed behavior in general, and the ability is commonly impaired in individuals with neurodevelopmental disorders (NDDs) such as autism spectrum disorder, attention deficit hyperactivity disorder, and learning disorders. The current study investigated a tablet-based cognitive flexibility intervention, Dino Island (DI), that combines a game-based, process-specific intervention with compensatory metacognitive strategies as delivered by classroom aides within a school setting.
Participants and Methods:
20 children between ages 6-12 years (x̄ = 10.83 years) with NDDs and identified executive function deficits, and their assigned classroom aides (i.e., “interventionists”), were randomly assigned to either DI or an educational game control condition. Interventionists completed a 2-4 hour online training course and a brief, remote Q&A session with the research team, which provided key information for delivering the intervention such as game-play and metacognitive/behavioral strategy instruction. Fidelity checks were conducted weekly. Interventionists were instructed to deliver 14-16 hours of intervention during the school day over 6-8 weeks, divided into 3-4 weekly sessions of 30-60 minutes each. Baseline and post-intervention assessments consisted of cognitive measures of cognitive flexibility (Minnesota Executive Function Scale), working memory (Wechsler Intelligence Scale for Children, 4th Edn. Integrated, Spatial Span), and a parent-completed EF rating scale (Behavior Rating Inventory of Executive Function).
Results:
Sample sizes were smaller than expected due to COVID-19-related disruptions within schools, so nonparametric analyses were conducted to explore trends in the data. Results of the Mann-Whitney U test indicated that participants in the DI condition made greater gains in cognitive flexibility, with a trend towards significance (p = 0.115). After dummy coding for positive change, results also indicated that gains in spatial working memory differed by condition (p = 0.127). Similarly, gains in task monitoring trended towards a significant difference by condition.
Conclusions:
DI, a novel EF intervention, may be beneficial to cognitive flexibility, working memory, and monitoring skills within youth with EF deficits. Though there were many absences and upheavals within the participating schools related to COVID-19, it is promising to see differences in outcomes with such a small sample. This poster will expand upon the current results as well as future directions for the DI intervention.
Injection drug use is a significant public health crisis with adverse health outcomes, including increased risk of human immunodeficiency virus (HIV) infection. Comorbidity of HIV and injection drug use is highly prevalent in the United States and disproportionately elevated in surrounding territories such as Puerto Rico. While both HIV status and injection drug use are independently known to be associated with cognitive deficits, the interaction of these effects remains largely unknown. The aim of this study was to determine how HIV status and injection drug use are related to cognitive functioning in a group of Puerto Rican participants. Additionally, we investigated the degree to which type and frequency of substance use predict cognitive abilities.
Participants and Methods:
96 Puerto Rican adults completed the Neuropsi Attention and Memory-3rd Edition battery for Spanish-speaking participants. Injection substance use over the previous 12 months was also obtained via clinical interview. Participants were categorized into four groups based on HIV status and injection substance use in the last 30 days (HIV+/injector, HIV+/non-injector, HIV-/injector, HIV-/non-injector). One-way analysis of variance (ANOVA) was conducted to determine differences between groups on each index of the Neuropsi battery (Attention and Executive Function; Memory; Attention and Memory). Multiple linear regression was used to determine whether type and frequency of substance use predicted performance on these indices while considering HIV status.
Results:
The one-way ANOVAs revealed significant differences (p’s < 0.01) between the healthy control group and all other groups across all indices. No significant differences were observed between the other groups. More frequent injection drug use, regardless of the substance, was associated with lower combined attention and memory performance compared with injecting less than monthly (monthly: p = 0.04; 2-3x daily: p < 0.01; 4-7x daily: p = 0.02; 8+ times daily: p < 0.01). Both minimal and heavy daily use predicted poorer memory performance (p = 0.02 and p = 0.01, respectively). Heavy heroin use predicted poorer attention and executive functioning (p = 0.04). Heroin use also predicted lower performance on tests of memory when used monthly (p = 0.049) and daily or almost daily (2-6x weekly: p = 0.04; 4-7x daily: p = 0.04). Finally, moderate injection of heroin predicted lower scores on attention and memory (weekly: p = 0.04; 2-6x weekly: p = 0.048). Heavy combined heroin and cocaine use predicted worse memory performance (p = 0.03) and combined attention and memory (p = 0.046). HIV status was not a moderating factor in any circumstance.
Conclusions:
As predicted, residents of Puerto Rico who do not inject substances and are HIV-negative performed better in domains of memory, attention, and executive function than those living with HIV and/or injecting substances. There was no significant difference among the affected groups in cognitive ability. As expected, daily injection of substances predicted worse performance on tasks of memory. Heavy heroin use predicted worse performance on executive function and memory tasks, while heroin-only and combined heroin and cocaine use predicted worse memory performance. Overall, the type and frequency of substance use are more predictive of cognitive functioning than HIV status.
The British Thyroid Association and American Thyroid Association guideline definitions for low-risk differentiated thyroid cancers are susceptible to differing interpretations, resulting in different clinical management in the UK.
Objective
To explore the national effect of these guidelines on the management of low-risk differentiated thyroid cancers.
Methods
Anonymised questionnaires were sent to multidisciplinary teams performing thyroidectomies in the UK. Risk factors that multidisciplinary teams considered important when managing low-risk differentiated thyroid cancers were established.
Results
Most surgeons (71 out of 75; 94.7 per cent) confirmed they were core multidisciplinary team members. More than 80 per cent of respondents performed at least 30 hemi- and/or total thyroidectomies per annum. A majority of multidisciplinary teams (50 out of 75; 66.7 per cent) followed British Thyroid Association guidelines. Risk factors considered important when managing low-risk differentiated thyroid cancers included: tumour histology subtype (87.8 per cent), tumour size of greater than 4 cm (86.5 per cent), tumour stage T3b (85.1 per cent) and central neck node involvement (85.1 per cent). Extent of thyroid surgery (e.g. hemi- or total thyroidectomy) was highly variable for low-risk differentiated thyroid cancers.
Conclusion
Management of low-risk differentiated thyroid cancers is highly variable, leading to a heterogeneous patient experience.
General medical conditions (GMCs) often co-occur with mental and substance use disorders (MSDs).
Aims
To explore the contribution of GMCs to the burden of disease in people with MSDs, and investigate how this varied by age.
Method
A population-based cohort of 6 988 507 persons living in Denmark during 2000–2015 was followed for up to 16 years. Danish health registers were used to identify people with MSDs and GMCs. For each MSD, years lived with disability and health loss proportion (HeLP) were estimated for comorbid MSDs and GMCs, using a multiplicative model for disability weights.
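The multiplicative model mentioned above combines disability weights for comorbid conditions so that the total health loss never exceeds 1. A minimal sketch, with illustrative weights rather than values from the Danish registers:

```python
def combined_disability_weight(weights):
    """Multiplicative combination of disability weights: health loss from
    comorbid conditions is 1 - prod(1 - w_i), which stays below 1 even
    for many comorbidities (an additive sum would not)."""
    remaining_health = 1.0
    for w in weights:
        remaining_health *= (1.0 - w)
    return 1.0 - remaining_health

# Two hypothetical comorbid conditions with weights 0.3 and 0.2:
print(round(combined_disability_weight([0.3, 0.2]), 2))  # → 0.44, not the additive 0.5
```

The HeLP values in the Results can be read as combined weights of this kind, averaged over person-time.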
Results
Those with any MSD lost the equivalent of 43% of healthy life (HeLP = 0.43, 95% CI 0.40–0.44) after including information on GMCs, which was an increase from 25% before including GMCs (HeLP = 0.25, 95% CI 0.23–0.27). Schizophrenia was associated with the highest burden of disease (HeLP = 0.77, 95% CI 0.68–0.85). However, within each disorder, the relative contribution of MSDs and GMCs varied. For example, in those diagnosed with schizophrenia, MSDs and GMCs accounted for 86% and 14% of the total health loss; in contrast, in those with anxiety disorders, the same proportions were 59% and 41%. In general, HeLP increased with age, and was mainly associated with increasing rates of pulmonary, musculoskeletal and circulatory diseases.
Conclusions
In those with mental disorders, the relative contribution of comorbid GMCs to the non-fatal burden of disease increases with age. GMCs contribute substantially to the non-fatal burden of disease in those with MSDs.
OBJECTIVES/GOALS: METHODS/STUDY POPULATION: Cell culture & protein identification: human T cells were purified from healthy blood, then activated & cultured for 5 days. CAR-T cells were collected from infusion bags of cancer patients undergoing CAR-T therapy. Silver staining of naive & activated healthy T-cell lysates was compared; βII-spectrin was identified as differentially expressed and confirmed by Western blot. Migration assays: naive & activated T cells were imaged during migration on ICAM-1 and ICAM-1 + CXCL12 coated plates. T cells were transfected with βII-spectrin cDNA & the chemokine dependence of migration was compared with controls. In-vivo studies: in a melanoma mouse model, βII-spectrin-transfected or control T cells were injected; tumors were followed with serial imaging. Human patient records were examined to correlate endogenous βII-spectrin levels and CAR-T response. RESULTS/ANTICIPATED RESULTS: Activated T cells downregulate the cytoskeletal protein βII-spectrin compared to naive cells, leading to chemokine-independent migration in in vitro assays and off-target trafficking when CAR-T cells are given in vivo. Restoration of βII-spectrin levels via transfection restores the chemokine dependence of activated T cells. In a mouse melanoma model, control mice injected with standard activated T cells showed fewer cells in the tumor site and more cells in off-target organs (spleen, lungs) compared with mice injected with βII-spectrin-transfected cells. Furthermore, among 3 human patients undergoing CAR-T therapy, those with higher endogenous βII-spectrin levels experienced fewer side-effects, as measured by neurotoxicity and cytokine release syndrome grades. DISCUSSION/SIGNIFICANCE: A major hurdle to widespread CAR-T therapy for cancer is significant, often fatal side-effects.
Our work shows that the protein βII-spectrin is downregulated during CAR-T production, and that restoring βII-spectrin levels decreases side-effects while increasing tumor clearance, hopefully translating to better CAR-T regimens in the future.
We present the data and initial results from the first pilot survey of the Evolutionary Map of the Universe (EMU), observed at 944 MHz with the Australian Square Kilometre Array Pathfinder (ASKAP) telescope. The survey covers $270 \,\mathrm{deg}^2$ of an area covered by the Dark Energy Survey, reaching a depth of 25–30 $\mu\mathrm{Jy\ beam}^{-1}$ rms at a spatial resolution of $\sim$11–18 arcsec, resulting in a catalogue of $\sim$220 000 sources, of which $\sim$180 000 are single-component sources. Here we present the catalogue of single-component sources, together with (where available) optical and infrared cross-identifications, classifications, and redshifts. This survey explores a new region of parameter space compared to previous surveys. Specifically, the EMU Pilot Survey has a high density of sources, and also a high sensitivity to low surface brightness emission. These properties result in the detection of types of sources that were rarely seen in or absent from previous surveys. We present some of these new results here.
In view of the increasing complexity of both cardiovascular implantable electronic devices (CIEDs) and patients in the current era, practice guidelines, by necessity, have become increasingly specific. This document is an expert consensus statement that has been developed to update and further delineate indications and management of CIEDs in pediatric patients, defined as ≤21 years of age, and is intended to focus primarily on the indications for CIEDs in the setting of specific disease categories. The document also highlights variations between previously published adult and pediatric CIED recommendations and provides rationale for underlying important differences. The document addresses some of the deterrents to CIED access in low- and middle-income countries and strategies to circumvent them. The document sections were divided up and drafted by the writing committee members according to their expertise. The recommendations represent the consensus opinion of the entire writing committee, graded by class of recommendation and level of evidence. Several questions addressed in this document either do not lend themselves to clinical trials or are rare disease entities, and in these instances recommendations are based on consensus expert opinion. Furthermore, specific recommendations, even when supported by substantial data, do not replace the need for clinical judgment and patient-specific decision-making. The recommendations were opened for public comment to Pediatric and Congenital Electrophysiology Society (PACES) members and underwent external review by the scientific and clinical document committee of the Heart Rhythm Society (HRS), the science advisory and coordinating committee of the American Heart Association (AHA), the American College of Cardiology (ACC), and the Association for European Paediatric and Congenital Cardiology (AEPC). 
The document received endorsement by all the collaborators and the Asia Pacific Heart Rhythm Society (APHRS), the Indian Heart Rhythm Society (IHRS), and the Latin American Heart Rhythm Society (LAHRS). This document is expected to provide support for clinicians and patients to allow for appropriate CIED use, appropriate CIED management, and appropriate CIED follow-up in pediatric patients.
To understand how the different data collections methods of the Alberta Health Services Infection Prevention and Control Program (IPC) and the National Surgical Quality Improvement Program (NSQIP) are affecting reported rates of surgical site infections (SSIs) following total hip replacements (THRs) and total knee replacements (TKRs).
Design:
Retrospective cohort study.
Setting:
Four hospitals in Alberta, Canada.
Patients:
Those with THR or TKR surgeries between September 1, 2015, and March 31, 2018.
Methods:
Demographic information and complex SSIs reported by IPC and NSQIP were compared; IPC and NSQIP data were then matched, and percent agreement and Cohen’s κ were calculated. Statistical analysis was performed for age, gender, and complex SSIs. A P value <.05 was considered significant.
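The matching statistics can be computed from a 2×2 table of case ascertainment by the two systems. A sketch using the discordant and concordant-positive counts reported in the Results; the concordant-negative count d is hypothetical, since the abstract does not report it:

```python
def agreement_stats(a, b, c, d):
    """Agreement between two surveillance systems from a 2x2 table:
    a = SSI flagged by both, b = system 1 only, c = system 2 only,
    d = flagged by neither. Returns positive agreement, negative
    agreement, and Cohen's kappa (chance-corrected agreement)."""
    n = a + b + c + d
    positive_agreement = 2 * a / (2 * a + b + c)
    negative_agreement = 2 * d / (2 * d + b + c)
    observed = (a + d) / n
    expected = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
    kappa = (observed - expected) / (1 - expected)
    return positive_agreement, negative_agreement, kappa

# Abstract counts: 7 SSIs found by both, 3 by IPC only, 12 by NSQIP only;
# d = 2000 is a made-up concordant-negative count for illustration.
pos, neg, kappa = agreement_stats(7, 3, 12, 2000)
print(round(pos, 2), round(neg, 2), round(kappa, 2))  # → 0.48 1.0 0.48
```

With a large concordant-negative cell, κ converges toward the positive-agreement statistic, which is the pattern the Results report (both 0.48).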
Results:
In total, 7,549 IPC and 2,037 NSQIP patients were compared. The complex SSI rate for NSQIP was higher compared to IPC (THR: 1.19 vs 0.68 [P = .147]; TKR: 0.92 vs 0.80 [P = .682]). After matching, 7 SSIs were identified by both IPC and NSQIP; 3 were identified only by IPC, and 12 were identified only by NSQIP (positive agreement, 0.48; negative agreement, 1.0; κ = 0.48).
Conclusions:
Different approaches to monitoring SSIs may lead to different results and trending patterns. NSQIP reports total SSI rates that are consistently higher than IPC’s. If systems are compared at any point in time, confidence in the data may be eroded. Stakeholders need to be aware of these variations, and education should be provided to facilitate an understanding of the differences and a consistent approach to SSI surveillance monitoring over time.
San Francisco (California USA) is a relatively compact city with a population of 884,000 and nine stroke centers within a 47 square mile area. Emergency Medical Services (EMS) transport distances and times are short and there are currently no Mobile Stroke Units (MSUs).
Methods:
This study evaluated EMS activation to computed tomography (CT [EMS-CT]) and EMS activation to thrombolysis (EMS-TPA) times for acute stroke in the first two years after implementation of an emergency department (ED) focused, direct EMS-to-CT protocol entitled “Mission Protocol” (MP) at a safety net hospital in San Francisco and compared performance to published reports from MSUs. The EMS times were abstracted from ambulance records. Geometric means were calculated for MP data and pooled means were similarly calculated from published MSU data.
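Geometric rather than arithmetic means are the natural summary here because interval times are right-skewed. A minimal sketch of the calculation, with invented EMS-to-CT intervals rather than study data:

```python
import math

def geometric_mean(minutes):
    """Geometric mean of interval times: exponentiate the mean of the
    log-transformed values. Appropriate for right-skewed durations such
    as EMS-activation-to-CT times."""
    return math.exp(sum(math.log(m) for m in minutes) / len(minutes))

# Hypothetical EMS-to-CT intervals in minutes:
times = [30, 40, 50, 45, 38]
print(round(geometric_mean(times), 1))  # → 40.0
```

A confidence interval can be obtained the same way, by back-transforming the CI of the log-scale mean.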
Results:
From July 2017 through June 2019, a total of 423 patients with suspected stroke were evaluated under the MP, and 166 of these patients were either ultimately diagnosed with ischemic stroke or were treated as a stroke but later diagnosed as a stroke mimic. The EMS and treatment time data were available for 134 of these patients with 61 patients (45.5%) receiving thrombolysis, with mean EMS-CT and EMS-TPA times of 41 minutes (95% CI, 39-43) and 63 minutes (95% CI, 57-70), respectively. The pooled estimates for MSUs suggested a mean EMS-CT time of 35 minutes (95% CI, 27-45) and a mean EMS-TPA time of 48 minutes (95% CI, 39-60). The MSUs achieved faster EMS-CT and EMS-TPA times (P <.0001 for each).
Conclusions:
In a moderate-sized, urban setting with high population density, MP was able to achieve EMS activation to treatment times for stroke thrombolysis that were approximately 15 minutes slower than the published performance of MSUs.
Understanding the development of specific components of the neonatal immune system is critical to the understanding of the susceptibility of the neonate to specific pathogens [1]. With the increasing survival of extremely premature infants, neonatologists and other physicians caring for these newborns need to be aware of the vulnerability of this population. Furthermore, it is important for neonatologists to be able to differentiate between immune immaturity and the manifestations of a true primary immunodeficiency that present during the neonatal period. Failure to properly identify primary or acquired immunodeficiency diseases can result in delayed diagnosis and treatment, adversely affecting outcomes. This chapter briefly describes the immune immaturity of the neonate and outlines a diagnostic approach for primary immune deficiency diseases that may present in the neonatal period.
Dopaminergic imaging is an established biomarker for dementia with Lewy bodies, but its diagnostic accuracy at the mild cognitive impairment (MCI) stage remains uncertain.
Aims
To provide robust prospective evidence of the diagnostic accuracy of dopaminergic imaging at the MCI stage to either support or refute its inclusion as a biomarker for the diagnosis of MCI with Lewy bodies.
Method
We conducted a prospective diagnostic accuracy study of baseline dopaminergic imaging with [123I]N-ω-fluoropropyl-2β-carbomethoxy-3β-(4-iodophenyl)nortropane single-photon emission computerised tomography (123I-FP-CIT SPECT) in 144 patients with MCI. Images were rated as normal or abnormal by a panel of experts with access to striatal binding ratio results. Follow-up consensus diagnosis based on the presence of core features of Lewy body disease was used as the reference standard.
Results
At latest assessment (mean 2 years) 61 patients had probable MCI with Lewy bodies, 26 possible MCI with Lewy bodies and 57 MCI due to Alzheimer's disease. The sensitivity of baseline FP-CIT visual rating for probable MCI with Lewy bodies was 66% (95% CI 52–77%), specificity 88% (76–95%) and accuracy 76% (68–84%), with positive likelihood ratio 5.3.
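The accuracy statistics above follow from a 2×2 diagnostic table. A sketch using cell counts back-calculated to be consistent with the reported percentages (they are illustrative, not the study's exact cells, and the 26 possible MCI with Lewy bodies cases are excluded):

```python
def diagnostic_accuracy(tp, fn, tn, fp):
    """Sensitivity, specificity, accuracy, and positive likelihood ratio
    from a 2x2 diagnostic table (abnormal scan vs. reference diagnosis).
    LR+ = sensitivity / (1 - specificity)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    lr_positive = sensitivity / (1 - specificity)
    return sensitivity, specificity, accuracy, lr_positive

# Back-calculated illustration: 61 probable MCI-LB cases (40 abnormal scans)
# vs 57 MCI-AD cases (7 abnormal scans).
sens, spec, acc, lr = diagnostic_accuracy(40, 21, 50, 7)
print(round(sens, 2), round(spec, 2), round(acc, 2), round(lr, 1))  # → 0.66 0.88 0.76 5.3
```

The positive likelihood ratio of about 5.3 is what grounds the "over five times as likely" statement in the Conclusions.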
Conclusions
It is over five times more likely for an abnormal scan to be found in probable MCI with Lewy bodies than in MCI due to Alzheimer's disease. Dopaminergic imaging appears to be useful at the MCI stage in cases where Lewy body disease is suspected clinically.
To conduct international comparisons of self-reports, collateral reports, and cross-informant agreement regarding older adult psychopathology.
Participants:
We compared self-ratings of problems (e.g. I cry a lot) and personal strengths (e.g. I like to help others) for 10,686 adults aged 60–102 years from 19 societies and collateral ratings for 7,065 of these adults from 12 societies.
Measurements:
Data were obtained via the Older Adult Self-Report (OASR) and the Older Adult Behavior Checklist (OABCL; Achenbach et al., 2004).
Results:
Cronbach’s alphas were .76 (OASR) and .80 (OABCL) averaged across societies. Across societies, 27 of the 30 problem items with the highest mean ratings and 28 of the 30 items with the lowest mean ratings were the same on the OASR and the OABCL. Q correlations between the means of the 0–1–2 ratings for the 113 problem items averaged across all pairs of societies yielded means of .77 (OASR) and .78 (OABCL). For the OASR and OABCL, respectively, analyses of variance (ANOVAs) yielded effect sizes (ESs) for society of 15% and 18% for Total Problems, and 42% and 31% for Personal Strengths. For 5,584 cross-informant dyads in 12 societies, cross-informant correlations averaged across societies were .68 for Total Problems and .58 for Personal Strengths. Mixed-model ANOVAs yielded large effects for society on both Total Problems (ES = 17%) and Personal Strengths (ES = 36%).
Conclusions:
The OASR and OABCL are efficient, low-cost, easily administered mental health assessments that can be used internationally to screen for many problems and strengths.