Securing Democracies examines the attacks on voting processes and the broader informational environment in which elections take place. The volume's global cadre of scholars and practitioners highlights the interconnections among efforts to target vulnerable democratic systems and identifies ways to prevent, defend against, and mitigate their effects on both the technical and the informational aspects of cybersecurity. The work takes a wider view of defending democracy by recognizing that both techniques—attacking infrastructure and using misinformation and disinformation—are means to undermine trust and confidence in democratic institutions. As such, the book proposes a wide range of policy responses to tackle these cyber-enabled threats, focusing on the geopolitical front lines, namely Eastern Europe, the Middle East, and East Asia. This title is also available as open access on Cambridge Core.
Patients with posttraumatic stress disorder (PTSD) exhibit smaller volumes in commonly reported brain regions, including the amygdala and hippocampus, which are associated with fear and memory processing. In the current study, we conducted a voxel-based morphometry (VBM) meta-analysis using whole-brain statistical maps with neuroimaging data from the ENIGMA-PGC PTSD working group.
Methods
T1-weighted structural neuroimaging scans from 36 cohorts (PTSD n = 1309; controls n = 2198) were processed using a standardized VBM pipeline (ENIGMA-VBM tool). We meta-analyzed the resulting statistical maps for voxel-wise differences in gray matter (GM) and white matter (WM) volumes between PTSD patients and controls, performed subgroup analyses considering the trauma exposure of the controls, and examined associations between regional brain volumes and clinical variables including PTSD (CAPS-4/5, PCL-5) and depression severity (BDI-II, PHQ-9).
Results
PTSD patients exhibited smaller GM volumes across the frontal and temporal lobes and the cerebellum, with the most significant effect in the left cerebellum (Hedges’ g = 0.22, p_corrected = .001), and smaller cerebellar WM volume (peak Hedges’ g = 0.14, p_corrected = .008). We observed similar regional differences when comparing patients to trauma-exposed controls, suggesting these structural abnormalities may be specific to PTSD. Regression analyses revealed that PTSD severity was negatively associated with GM volumes within the cerebellum (p_corrected = .003), while depression severity was negatively associated with GM volumes within the cerebellum and superior frontal gyrus in patients (p_corrected = .001).
Conclusions
PTSD patients exhibited widespread, regional differences in brain volumes where greater regional deficits appeared to reflect more severe symptoms. Our findings add to the growing literature implicating the cerebellum in PTSD psychopathology.
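The Hedges’ g values reported above are standardized mean differences with a small-sample bias correction. A minimal sketch of the computation, using hypothetical group summaries (the means and SDs below are illustrative, not values from the study; only the group sizes mirror the cohort counts in the Methods):

```python
import math

def hedges_g(mean1, sd1, n1, mean2, sd2, n2):
    """Hedges' g: Cohen's d scaled by a small-sample bias-correction factor J."""
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (mean1 - mean2) / sp          # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)   # bias-correction factor J
    return d * j

# Hypothetical regional GM volumes (arbitrary units):
# controls (n = 2198) vs PTSD patients (n = 1309)
g = hedges_g(mean1=102.0, sd1=9.0, n1=2198, mean2=100.0, sd2=9.2, n2=1309)
print(round(g, 2))  # → 0.22
```

With samples this large, J is nearly 1, so g differs from Cohen’s d only in the third decimal place.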
Medicare claims are frequently used to study Clostridioides difficile infection (CDI) epidemiology. However, they lack specimen collection and diagnosis dates to assign location of onset. Algorithms to classify CDI onset location using claims data have been published, but the degree of misclassification is unknown.
Methods:
We linked patients with laboratory-confirmed CDI reported to four Emerging Infections Program (EIP) sites from 2016–2021 to Medicare beneficiaries with fee-for-service Part A/B coverage. We calculated sensitivity of ICD-10-CM codes in claims within ±28 days of EIP specimen collection. CDI was categorized as hospital, long-term care facility, or community-onset using three different Medicare claims-based algorithms based on claim type, ICD-10-CM code position, duration of hospitalization, and ICD-10-CM diagnosis code presence-on-admission indicators. We assessed concordance of EIP case classifications, based on chart review and specimen collection date, with claims case classifications using Cohen’s kappa statistic.
Results:
Of 12,671 CDI cases eligible for linkage, 9,032 (71%) were linked to a single, unique Medicare beneficiary. Compared to EIP, sensitivity of CDI ICD-10-CM codes was 81%; codes were more likely to be present for hospitalized patients (93.0%) than for those who were not (56.2%). Concordance between EIP and Medicare claims algorithms ranged from 68% to 75%, depending on the algorithm used (κ = 0.56–0.66).
Conclusion:
ICD-10-CM codes in Medicare claims data had high sensitivity compared to laboratory-confirmed CDI reported to EIP. Claims-based epidemiologic classification algorithms had moderate concordance with EIP classification of onset location. Misclassification of CDI onset location using Medicare algorithms may bias findings of claims-based CDI studies.
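Cohen’s kappa, used above to quantify concordance between EIP chart-review classifications and the claims-based algorithms, corrects raw agreement for the agreement expected by chance from each rater’s marginal frequencies. A minimal sketch with hypothetical onset labels (HO/LTCFO/CO; the data below are illustrative only, not study data):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is chance agreement implied by each rater's marginal frequencies."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    cats = set(freq_a) | set(freq_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in cats)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical onset classifications: EIP chart review vs a claims algorithm
eip    = ["HO", "HO", "LTCFO", "CO", "CO", "CO", "HO", "LTCFO", "CO", "HO"]
claims = ["HO", "HO", "LTCFO", "HO", "CO", "CO", "HO", "CO",    "CO", "HO"]
print(round(cohens_kappa(eip, claims), 2))  # → 0.68
```

Here raw agreement is 80%, but kappa is lower because both raters assign HO and CO frequently, so some agreement is expected by chance.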
Product architecture decisions are made early in the product development process and have far-reaching effects. Unless anticipated through experience or intuition, many of these effects may not be apparent until much later in the development process, making changes to the architecture costly in time, effort and resources. Many researchers through the years have studied various elements of product architecture and their effects. By using a repeatable process for aggregating statements on the effects of architecture strategies from a selection of the literature on the topic and storing them in a systematic database, this information can then be recalled and presented in the form of a Product Architecture Strategy and Effect (PASE) matrix. PASE matrices allow for the identification, comparison, evaluation, and then selection of the most desirable product architecture strategies before expending resources along a specific development path. This paper introduces the PASE Database and matrix and describes their construction and use in guiding design decisions. This paper also provides metrics for understanding the robustness of this database.
SCN2A encodes a voltage-gated sodium channel (designated NaV1.2) vital for generating neuronal action potentials. Pathogenic SCN2A variants are associated with a diverse array of neurodevelopmental disorders featuring neonatal or infantile onset epilepsy, developmental delay, autism, intellectual disability and movement disorders. SCN2A is a high confidence risk gene for autism spectrum disorder and a commonly discovered cause of neonatal onset epilepsy. This remarkable clinical heterogeneity is mirrored by extensive allelic heterogeneity and complex genotype-phenotype relationships partially explained by divergent functional consequences of pathogenic variants. Emerging therapeutic strategies targeted to specific patterns of NaV1.2 dysfunction offer hope to improving the lives of individuals affected by SCN2A-related disorders. This Element provides a review of the clinical features, genetic basis, pathophysiology, pharmacology and treatment of these genetic conditions authored by leading experts in the field and accompanied by perspectives shared by affected families. This title is also available as Open Access on Cambridge Core.
Efficient evidence generation to assess the clinical and economic impact of medical therapies is critical amid rising healthcare costs and aging populations. However, drug development and clinical trials remain far too expensive and inefficient for all stakeholders. On October 25–26, 2023, the Duke Clinical Research Institute brought together leaders from academia, industry, government agencies, patient advocacy, and nonprofit organizations to explore how different entities and influencers in drug development and healthcare can realign incentive structures to efficiently accelerate evidence generation that addresses the highest public health needs. Prominent themes surfaced, including competing research priorities and incentives, inadequate representation of patient population in clinical trials, opportunities to better leverage existing technology and infrastructure in trial design, and a need for heightened transparency and accountability in research practices. The group determined that together these elements contribute to an inefficient and costly clinical research enterprise, amplifying disparities in population health and sustaining gaps in evidence that impede advancements in equitable healthcare delivery and outcomes. The goal of addressing the identified challenges is to ultimately make clinical trials faster, more inclusive, and more efficient across diverse communities and settings.
The Accelerating COVID-19 Therapeutic Interventions and Vaccines (ACTIV) Cross-Trial Statistics Group gathered lessons learned from statisticians responsible for the design and analysis of the 11 ACTIV therapeutic master protocols to inform contemporary trial design as well as preparation for a future pandemic. The ACTIV master protocols were designed to rapidly assess what treatments might save lives, keep people out of the hospital, and help them feel better faster. Study teams initially worked without knowledge of the natural history of disease and thus without key information for design decisions. Moreover, the science of platform trial design was in its infancy. Here, we discuss the statistical design choices made and the adaptations forced by the changing pandemic context. Lessons around critical aspects of trial design are summarized, and recommendations are made for the organization of master protocols in the future.
Rift propagation, rather than basal melt, drives the destabilization and disintegration of the Thwaites Eastern Ice Shelf. Since 2016, rifts have episodically advanced throughout the central ice-shelf area, with rapid propagation events occurring during austral spring. The ice shelf's speed has increased by ~70% during this period, transitioning from a rate of 1.65 m d−1 in 2019 to 2.85 m d−1 by early 2023 in the central area. The increase in longitudinal strain rates near the grounding zone has led to full-thickness rifts and melange-filled gaps since 2020. A recent sea-ice breakout has accelerated retreat at the western calving front, effectively separating the ice shelf from what remained of its northwestern pinning point. Meanwhile, a distributed set of phase-sensitive radar measurements indicates that the basal melting rate is generally small, likely due to a widespread robust ocean stratification beneath the ice–ocean interface that suppresses basal melt despite the presence of substantial oceanic heat at depth. These observations in combination with damage modeling show that, while ocean forcing is responsible for triggering the current West Antarctic ice retreat, the Thwaites Eastern Ice Shelf is experiencing dynamic feedbacks over decadal timescales that are driving ice-shelf disintegration, now independent of basal melt.
Background: Medicare claims are frequently used to study Clostridioides difficile infection (CDI) epidemiology. Categorizing CDI based on location of onset and potential exposure is critical in understanding transmission patterns and prevention strategies. While claims data are well-suited for identifying prior healthcare utilization exposures, they lack specimen collection and diagnosis dates to assign likely location of onset. Algorithms to classify CDI onset and healthcare association using claims data have been published, but the degree of misclassification is unknown. Methods: We linked patients with laboratory-confirmed CDI reported to four Emerging Infections Program (EIP) sites from 2016-2020 to Medicare beneficiaries using residence, birth date, sex, and hospitalization and/or healthcare exposure dates. Uniquely linked patients with fee-for-service Medicare A/B coverage and complete EIP case report forms were included. Patients with a claims CDI diagnosis code within ±28 days of a positive CDI test reported to EIP were categorized as hospital-onset (HO), long-term care facility onset (LTCFO), or community-onset (CO, either healthcare facility-associated [COHCFA] or community-associated [CA]) using a previously published algorithm based on claim type, ICD-10-CM code position, and duration of hospitalization (if applicable). EIP classifies CDI into these categories using positive specimen collection date and other information from chart review (e.g. admit/discharge dates). We assessed concordance of EIP and claims case classifications using Cohen’s kappa. Results: Of 10,002 eligible EIP-identified CDI cases, 7,064 were linked to a unique beneficiary; 3,451 met Medicare A/B fee-for-service coverage inclusion criteria. Of these, 650 (19%) did not have a claims diagnosis code ±28 days of the EIP specimen collection date (Table); 48% (313/650) of those without a claims diagnosis code were categorized by EIP as CA CDI. 
Among those with a CDI diagnosis code, concurrence of claims-based and EIP CDI classification was 68% (κ=0.56). Concurrence was highest for HO and lowest for COHCFA CDI. A substantial number of EIP-classified CO CDIs (30%, Figure) were misclassified as HO using the claims-based algorithm; half of these had a primary ICD-10 diagnosis code of sepsis (226/454; 50%). Conclusions: Evidence of CDI in claims data was found for 81% of EIP-reported CDI cases. Medicare classification algorithms concurred with the EIP classification in 68% of cases. Discordance was most common for community-onset CDI patients, many of whom were hospitalized with a primary diagnosis of sepsis. Misclassification of CO-CDI as HO may bias findings of claims-based CDI studies.
Background: Nursing home (NH) residents are at high risk of COVID-19 from exposure to infected staff and other residents. Understanding SARS-CoV-2 viral RNA kinetics in residents and staff can guide testing, isolation, and return-to-work recommendations. We sought to determine the duration of antigen test and polymerase chain reaction (PCR) positivity in a cohort of NH residents and staff. Methods: We prospectively collected data on SARS-CoV-2 viral kinetics from April 2023 through November 2023. Staff and residents could enroll prospectively or upon a positive test (identified through routine clinical testing, screening, or outbreak response testing). Participating facilities performed routine clinical testing; asymptomatic testing of contacts was performed within 48 hours if an outbreak or known exposure occurred and upon (re-)admission. Enrolled participants who tested positive for SARS-CoV-2 were re-tested daily for 14 days with both nasal antigen and nasal PCR tests. All PCR tests were run by a central lab with the same assay. We conducted a Kaplan-Meier survival analysis on time to first negative test, restricted to participants who initially tested positive (day zero) and had at least one test ≥10 days after initially testing positive with the same test type; a participant could contribute to both antigen and PCR survival curves. We compared survival curves for staff and residents using the log-rank test. Results: Twenty-four nursing homes in eight states participated; 587 participants (275 residents, 312 staff) enrolled in the evaluation, all of whom were tested through routine clinical or outbreak response testing. Seventy-two participants tested positive for antigen; of these, 63 tested PCR-positive. Residents remained antigen- and PCR-positive longer than staff (Figure 1), but this difference was statistically significant only for duration of PCR positivity (p=0.006).
Five days after the first positive test, 56% of 50 residents and 59% of 22 staff remained antigen-positive; 91% of 44 residents and 79% of 19 staff were PCR-positive. Ten days after the first positive test, 22% of 50 residents and 5% of 22 staff remained antigen-positive; 61% of 44 residents and 21% of 19 staff remained PCR-positive. Conclusions: Most NH residents and staff with SARS-CoV-2 remained antigen- or PCR-positive 5 days after the initial positive test; however, differences between staff and resident test positivity were noted at 10 days. These data can inform recommendations for testing, duration of NH resident isolation, and return to work guidance for staff. Additional viral culture data may strengthen these conclusions.
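The Kaplan-Meier analysis described in the Methods estimates the probability of remaining test-positive over time, treating participants whose last observed test was still positive as censored. A bare-bones sketch of the estimator on hypothetical data (days to first negative test; not the study’s data):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier estimate of S(t): P(still test-positive at day t).

    times  : day of first negative test, or day of last test if none observed
    events : 1 if a first negative test was observed, 0 if censored
    Returns (time, survival) steps at each observed event time.
    """
    curve, surv = [], 1.0
    for t in sorted(set(times)):
        d = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)
        n = sum(1 for ti in times if ti >= t)  # still at risk at day t
        if d:
            surv *= 1 - d / n
            curve.append((t, surv))
    return curve

# Hypothetical participants: first negative on days 2, 3, 5, 8; two censored
curve = kaplan_meier(times=[2, 3, 3, 5, 8, 10], events=[1, 1, 0, 1, 1, 0])
```

A log-rank test comparing staff and resident curves, as in the abstract, would additionally compare observed versus expected events at each event time across the two groups.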
Disclosure: Stefan Gravenstein: Received consulting and speaker fees from most vaccine manufacturers (Sanofi, Seqirus, Moderna, Merck, Janssen, Pfizer, Novavax, GSK) and has or expects to receive grant funding from several (Sanofi, Seqirus, Moderna, Pfizer, GSK). Lona Mody: NIH, VA, CDC, Kahn Foundation; Honoraria: UpToDate; Contracted Research: Nano-Vibronix
We study self-similar viscous fingering for the case of divergent flow within a wedge-shaped Hele-Shaw cell. Previous authors have conjectured the existence of a countably infinite number of selected solutions, each distinguished by a different value of the relative finger angle. Interestingly, the associated solution branches have been posited to merge and disappear in pairs as the surface tension decreases. For the first time, we demonstrate how the selection mechanism can be derived based on exponential asymptotics. Asymptotic predictions of the finger-to-wedge angle are additionally given for different-sized wedges and surface-tension values. The merging of solution branches is explained; this feature is qualitatively different from the case of classic Saffman–Taylor viscous fingering in a parallel channel configuration. Moreover, because the asymptotic framework does not depend strongly on the specifics of the wedge geometry, the proposed theory for branch merging in our self-similar problem likely applies much more widely to tip-splitting instabilities in time-dependent flows in circular and other geometries, where the viscous fingers destabilise and divide in two.
Knowledge of sex differences in risk factors for posttraumatic stress disorder (PTSD) can contribute to the development of refined preventive interventions. Therefore, the aim of this study was to examine if women and men differ in their vulnerability to risk factors for PTSD.
Methods
As part of the longitudinal AURORA study, 2924 patients seeking emergency department (ED) treatment in the acute aftermath of trauma provided self-report assessments of pre-, peri-, and post-traumatic risk factors, as well as 3-month PTSD severity. We systematically examined sex-dependent effects of 16 risk factors that have previously been hypothesized to show different associations with PTSD severity in women and men.
Results
Women reported higher PTSD severity at 3 months post-trauma. Z-score comparisons indicated that for five of the 16 examined risk factors, the association with 3-month PTSD severity was stronger in men than in women. In multivariable models, interaction effects with sex were observed for pre-traumatic anxiety symptoms and acute dissociative symptoms; both showed stronger associations with PTSD in men than in women. Subgroup analyses suggested trauma type-conditional effects.
Conclusions
Our findings indicate mechanisms to which men might be particularly vulnerable, demonstrating that known PTSD risk factors might behave differently in women and men. Analyses did not identify any risk factors to which women were more vulnerable than men, pointing toward further mechanisms to explain women's higher PTSD risk. Our study illustrates the need for a more systematic examination of sex differences in contributors to PTSD severity after trauma, which may inform refined preventive interventions.
Understanding characteristics of healthcare personnel (HCP) with SARS-CoV-2 infection supports the development and prioritization of interventions to protect this important workforce. We report detailed characteristics of HCP who tested positive for SARS-CoV-2 from April 20, 2020 through December 31, 2021.
Methods:
CDC collaborated with Emerging Infections Program sites in 10 states to interview HCP with SARS-CoV-2 infection (case-HCP) about their demographics, underlying medical conditions, healthcare roles, exposures, personal protective equipment (PPE) use, and COVID-19 vaccination status. We grouped case-HCP by healthcare role. To describe residential social vulnerability, we merged geocoded HCP residential addresses with CDC/ATSDR Social Vulnerability Index (SVI) values at the census tract level. We defined highest and lowest SVI quartiles as high and low social vulnerability, respectively.
Results:
Our analysis included 7,531 case-HCP. Most case-HCP with roles as certified nursing assistant (CNA) (444, 61.3%), medical assistant (252, 65.3%), or home healthcare worker (HHW) (225, 59.5%) reported their race and ethnicity as either non-Hispanic Black or Hispanic. More than one third of HHWs (166, 45.2%), CNAs (283, 41.7%), and medical assistants (138, 37.9%) reported a residential address in the high social vulnerability category. The proportion of case-HCP who reported using recommended PPE at all times when caring for patients with COVID-19 was lowest among HHWs compared with other roles.
Conclusions:
To mitigate SARS-CoV-2 infection risk in healthcare settings, infection prevention and control interventions should be specific to HCP roles and educational backgrounds. Additional interventions are needed to address high social vulnerability among HHWs, CNAs, and medical assistants.
Patients tested for Clostridioides difficile infection (CDI) using a 2-step algorithm with a nucleic acid amplification test (NAAT) followed by toxin assay are not reported to the National Healthcare Safety Network as a laboratory-identified CDI event if they are NAAT positive (+)/toxin negative (−). We compared NAAT+/toxin− and NAAT+/toxin+ patients and identified factors associated with CDI treatment among NAAT+/toxin− patients.
Design:
Retrospective observational study.
Setting:
The study was conducted across 36 laboratories at 5 Emerging Infections Program sites.
Patients:
We defined a CDI case as a positive test detected by this 2-step algorithm during 2018–2020 in a patient aged ≥1 year with no positive test in the previous 8 weeks.
Methods:
We used multivariable logistic regression to compare CDI-related complications and recurrence between NAAT+/toxin− and NAAT+/toxin+ cases. We used a mixed-effects logistic model to identify factors associated with treatment in NAAT+/toxin− cases.
Results:
Of 1,801 cases, 1,252 were NAAT+/toxin−, and 549 were NAAT+/toxin+. CDI treatment was given to 866 (71.5%) of 1,212 NAAT+/toxin− cases versus 510 (95.9%) of 532 NAAT+/toxin+ cases (P < .0001). NAAT+/toxin− status was protective for recurrence (adjusted odds ratio [aOR], 0.65; 95% CI, 0.55–0.77) but not CDI-related complications (aOR, 1.05; 95% CI, 0.87–1.28). Among NAAT+/toxin− cases, white blood cell count ≥15,000/µL (aOR, 1.87; 95% CI, 1.28–2.74), ≥3 unformed stools for ≥1 day (aOR, 1.90; 95% CI, 1.40–2.59), and diagnosis by a laboratory that provided no or neutral interpretive comments (aOR, 3.23; 95% CI, 2.23–4.68) were predictors of CDI treatment.
Conclusion:
Use of this 2-step algorithm likely results in underreporting of some NAAT+/toxin− cases with clinically relevant CDI. Disease severity and laboratory interpretive comments influence treatment decisions for NAAT+/toxin− cases.
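The adjusted odds ratios above come from multivariable logistic models; for intuition, an unadjusted odds ratio with a Woolf 95% CI can be computed directly from a 2×2 table. A sketch with hypothetical counts (the numbers below are illustrative, not the study’s data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio and Woolf 95% CI from a 2x2 table.

    a, b: outcome present/absent in the exposed group
    c, d: outcome present/absent in the unexposed group
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical: recurrence yes/no among NAAT+/toxin- vs NAAT+/toxin+ cases
or_, lo, hi = odds_ratio_ci(150, 1102, 120, 429)
```

An OR below 1 with a CI excluding 1 would correspond to the protective association reported for NAAT+/toxin− status; the adjusted estimates in the abstract additionally control for covariates, which this unadjusted sketch does not.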
One montmorillonite, STx-1 (Texas, USA), was activated with different amounts of Al and tetramethylammonium (TMA+) cations to obtain materials with a combined Al3+ and TMA+ content equal to its cation exchange capacity. The adsorption capacity of these samples was studied by saturating them with hept-1-ene at room temperature. The samples were then heated and the evolved gases analyzed by Fourier transform infrared spectroscopy and gas chromatography-mass spectrometry. Hept-1-ene reacted with the clays via proton transfer, resulting in the formation of a variety of reaction products (>60 hydrocarbons). In general, the presence of TMA+ cations significantly reduced the population of protons, leading to the selective production of isomerization and hydration products.
Cognitive decline is expected in normative aging (Cabeza et al., 2018; Salthouse, 2019), which can lead to impairments in adaptive functioning (Yam et al., 2014). Several cognitive domains have been associated with adaptive functioning in older adult samples, including processing speed and executive functioning (e.g., Nguyen et al., 2019; Vaughn & Giovanello, 2010). A recent study examining a mixed clinical sample of older adults demonstrated that processing speed was more predictive of functional decline than other cognitive domains, including aspects of executive functioning (Roye et al., 2022). Therefore, this study attempts to build on previous findings by further examining the relationships between processing speed, adaptive functioning, and executive functioning. Specifically, it investigated the extent to which processing speed mediated the associations between executive functioning and adaptive functioning.
Participants and Methods:
Participants (N = 239) were selected from a clinical database of neuropsychological evaluations. Inclusion criteria were age 60+ (M = 74.0, SD = 6.9) and completion of relevant study measures. Participants were majority White (93%) women (53.1%). Three cognitive diagnosis groups were coded: No Diagnosis (N = 82), Mild Neurocognitive Disorder (NCD; N = 78), and Major NCD (N = 79). The Texas Functional Living Scale (TFLS) was used as a performance-based measure of adaptive functioning. Processing speed was measured using the Coding subtest from the Repeatable Battery for the Assessment of Neuropsychological Status. Executive functioning performance was quantified using part B of the Trail Making Test, Controlled Oral Word Association Test, and Similarities and Matrix Reasoning subtests from the WAIS-IV and WASI-II. Mediation models included age and years of education as covariates and indirect effects were assessed with bootstrapped confidence intervals (Hayes, 2020).
Results:
Processing speed mediated the association between each executive functioning measure and adaptive functioning. The pattern was consistent across all executive functioning measures such that poorer executive functioning was associated with poorer processing speed, which was in turn associated with poorer adaptive functioning. Direct effects were significant for all models (ps < .03), suggesting that executive functioning maintained unique associations with adaptive functioning. Follow-up analyses indicated no evidence for moderation of the mediation models based on diagnostic group.
Conclusions:
These results highlight the importance of processing speed in understanding real-world implications of pathological and non-pathological cognitive aging. Processing speed mediated all relationships between executive functioning and adaptive functioning. There was no evidence for moderation of these effects, supporting generalizability regardless of neurocognitive disorder and etiologic subtype. Further investigation is warranted into the importance of processing speed in explaining associations of other cognitive domains with adaptive functioning.
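The mediation analyses described here estimate an indirect effect (the product of the executive-functioning → processing-speed path and the processing-speed → adaptive-functioning path) and test it with a bootstrapped confidence interval. A self-contained sketch on simulated data (variable names, effect sizes, and sample size below are hypothetical, not estimates from this study):

```python
import random

def ols(X, y):
    """Least-squares coefficients via normal equations (Gauss-Jordan solve)."""
    k = len(X[0])
    a = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for i in range(k):
        piv = a[i][i]
        a[i] = [v / piv for v in a[i]]
        b[i] /= piv
        for r in range(k):
            if r != i:
                f = a[r][i]
                a[r] = [v - f * w for v, w in zip(a[r], a[i])]
                b[r] -= f * b[i]
    return b

def indirect_effect(x, m, y):
    """a*b: slope of x->m times slope of m->y controlling for x."""
    a_path = ols([[1, xi] for xi in x], m)[1]
    b_path = ols([[1, xi, mi] for xi, mi in zip(x, m)], y)[2]
    return a_path * b_path

def bootstrap_ci(x, m, y, n_boot=500, seed=1):
    """Percentile bootstrap 95% CI for the indirect effect."""
    rng, n, est = random.Random(seed), len(x), []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        est.append(indirect_effect([x[i] for i in idx],
                                   [m[i] for i in idx],
                                   [y[i] for i in idx]))
    est.sort()
    return est[int(0.025 * n_boot)], est[int(0.975 * n_boot)]

# Simulated data: x = executive functioning, m = processing speed (mediator),
# y = adaptive functioning; true indirect effect = 0.6 * 0.5 = 0.3
rng = random.Random(0)
x = [rng.gauss(0, 1) for _ in range(150)]
m = [0.6 * xi + rng.gauss(0, 1) for xi in x]
y = [0.5 * mi + 0.2 * xi + rng.gauss(0, 1) for xi, mi in zip(x, m)]
ie = indirect_effect(x, m, y)
lo, hi = bootstrap_ci(x, m, y)
```

A bootstrap CI excluding zero is the usual evidence for mediation; in the analyses above, covariates such as age and education would enter both regressions as additional columns.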
The presence of cognitive impairment corresponds with declines in adaptive functioning (Cahn-Weiner, Ready, & Malloy, 2003). Although memory loss is often highlighted as a key deficit in neurodegenerative diseases (Arvanitakis et al., 2018), research indicates that processing speed may be equally important when predicting functional outcomes in atypical cognitive decline (Roye et al., 2022). Additionally, the development of performance-based measures of adaptive functioning offers a quantifiable depiction of functional deficits within a clinical setting. This study investigated the degree to which processing speed explains the relationship between immediate/delayed memory and adaptive functioning in patients diagnosed with mild and major neurocognitive disorders using an objective measure of adaptive functioning.
Participants and Methods:
Participants (N = 115) were selected from a clinical database of neuropsychological evaluations. Included participants were ages 65+ (M = 74.7, SD = 5.15), completed all relevant study measures, and were diagnosed with Mild Neurocognitive Disorder (NCD; N = 69) or Major NCD (N = 46). They were majority white (87.8%) women (53.0%). The Texas Functional Living Scale was used as a performance-based measure of adaptive functioning. The Coding subtest from the Repeatable Battery for the Assessment of Neuropsychological Status (RBANS-CD) was used to measure information processing speed. Composite memory measures for Immediate Recall and Delayed Recall were created from subtests of the RBANS (List Learning, Story Memory, and Figure Recall) and the Wechsler Memory Scale-IV (Logical Memory and Visual Reproduction). Multiple regressions were conducted to evaluate the importance of memory and information processing speed in understanding adaptive functioning. Age and years of education were added as covariates in regression analyses.
Results:
Significant correlations (p < .001) were found between adaptive functioning and processing speed (PS; r = .52), immediate memory (IM; r = .43), and delayed memory (DM; r = .32). In a regression model with IM and DM predicting daily functioning, only IM significantly explained daily functioning (r_sp = .24, p = .009). A multiple regression revealed daily functioning was significantly and uniquely associated with IM (r_sp = .28, p < .001) and PS (r_sp = .41, p < .001). This was qualified by a significant interaction effect (r_sp = -.29, p = .001), revealing that IM was associated with adaptive functioning only at PS scores below the RBANS normative 20th percentile.
Conclusions:
Results suggest that processing speed may be a more sensitive predictor of functional decline than memory among older adults with cognitive disorders. These findings support further investigation into the clinical utility of processing speed tests for predicting functional decline in older adults.