A nonparametric test of dispersion for paired replicates data is described that involves jackknifing the logarithm of the ratio of variance estimates for the pre- and post-treatment populations. Results from a Monte Carlo simulation show that the test performs well under the null hypothesis (H0) and has good power properties. Examples of applying the procedure to psychiatric data are given.
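The abstract does not spell out the exact construction of the statistic, so the following Python sketch is only a generic illustration of the jackknife idea it describes, under the assumption that the statistic is the log ratio of the pre- and post-treatment sample variances: form leave-one-pair-out pseudovalues of Q = ln(s²_pre / s²_post) and test their mean against zero with a t statistic. Function and variable names are illustrative, not the authors'.

```python
import numpy as np
from scipy import stats

def jackknife_dispersion_test(pre, post):
    """Jackknife test of dispersion for paired replicates (illustrative sketch).

    Computes Q = ln(s_pre^2 / s_post^2), forms leave-one-pair-out pseudovalues,
    and tests whether their mean differs from zero with a one-sample t statistic.
    """
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    n = len(pre)
    assert len(post) == n and n > 2, "need paired data with n > 2"

    def log_var_ratio(x, y):
        return np.log(np.var(x, ddof=1) / np.var(y, ddof=1))

    q_full = log_var_ratio(pre, post)
    pseudovalues = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i                      # leave pair i out
        pseudovalues[i] = n * q_full - (n - 1) * log_var_ratio(pre[keep], post[keep])

    mean_pv = pseudovalues.mean()
    se_pv = pseudovalues.std(ddof=1) / np.sqrt(n)
    t_stat = mean_pv / se_pv
    p_value = 2 * stats.t.sf(abs(t_stat), df=n - 1)   # two-sided
    return t_stat, p_value

# Simulated example: post-treatment scores are more dispersed than pre-treatment
rng = np.random.default_rng(0)
pre = rng.normal(50, 10, size=30)
post = rng.normal(50, 15, size=30)
print(jackknife_dispersion_test(pre, post))
```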
How was trust created and reinforced between the inhabitants of medieval and early modern cities? And how did the social foundations of trusting relationships change over time? Current research highlights the role of kinship, neighbourhood, and associations, particularly guilds, in creating ‘relationships of trust’ and social capital in the face of high levels of migration, mortality, and economic volatility, but tells us little about their relative importance or how they developed. We uncover a profound shift in the contribution of family and guilds to trust networks among the middling and elite of one of Europe's major cities, London, over three centuries, from the 1330s to the 1680s. We examine almost 15,000 networks of sureties created to secure orphans’ inheritances to measure the presence of trusting relationships connected by guild membership, family, and place. We find a marked increase in the role of kinship – a re-embedding of trust within the family – and a decline in the importance of shared guild membership in connecting the Londoners who jointly secured orphans’ inheritances. These developments indicate a profound transformation in the social fabric of urban society.
Cerebral autosomal dominant arteriopathy with subcortical infarcts and leukoencephalopathy (CADASIL) is a hereditary form of cerebral small vessel disease leading to early cerebrovascular changes. These changes result from mutations in the NOTCH3 gene that cause progressive accumulation of granular osmiophilic material (GOM) deposits, thickening arterial walls and reducing or restricting blood flow in the brain. The clinical presentation of CADASIL is characterized by migraines with aura, early and recurrent strokes, progressive cognitive impairment, and psychiatric disturbances. CADASIL is rare but frequently underrecognized or misdiagnosed. Because CADASIL is a genetic condition with a 50% risk of inheritance from an affected parent, the gold standard for diagnosis is genetic testing for mutations in the NOTCH3 gene. This presentation aims to familiarize neuropsychologists with CADASIL through a unique case study highlighting important psychological, social, and ethical considerations raised by genetic testing.
Participants and Methods:
This case study presents a 67-year-old, right-handed, married woman diagnosed with CADASIL who was referred for neuropsychological evaluation because of cognitive concerns and low mood following multiple ischemic events.
Results:
Results revealed severe cognitive deficits in domains of attention, learning, and memory. Her superior verbal abilities and executive function remained largely intact. Assessment of mood revealed elevations in symptoms of depression and anxiety. The patient was aware of CADASIL in her father, paternal aunt, and younger brother, but elected to forego any genetic testing to confirm whether she had the condition until she experienced a stroke at age 61. She has two adult children who have also elected to forego testing and currently remain asymptomatic. Cognitive profile, mood disturbances, and patient perspectives on refraining from pre-symptomatic genetic testing for CADASIL diagnosis will be discussed.
Conclusions:
Aspects of this case are consistent with a small body of literature evidencing distinct psychological, emotional, and social challenges among families carrying genetic risk of CADASIL. While providing an example of an often underrecognized neurological disorder with which neuropsychologists should be familiar, this case uniquely raises ethical questions relevant to care providers and current treatment guidelines regarding genetic testing among families carrying highly heritable neurological conditions. In particular, personal ethical challenges around deciding to pursue or forego pre-symptomatic testing, and implications for family planning, highlight the importance of genetic counseling for affected families.
The Personality Assessment Inventory (PAI; Morey, 1991, 2007) is a 344-item self-report measure of personality, psychopathology, and factors affecting treatment. The PAI short form (PAI-SF) contains the first 160 items of the PAI and is often favoured as a screening tool or brief version to mitigate respondent burden and fatigue. The PAI has been psychometrically validated among numerous populations (Slavin-Mulford et al., 2012), while psychometric research on the PAI-SF is gradually emerging. The psychometric properties of the PAI-SF range from adequate to strong in psychiatric (Sinclair et al., 2009), forensic (Sinclair et al., 2010), outpatient and nonclinical (Ward et al., 2018), and stroke (Udala et al., 2020) samples. To advance research validating the PAI-SF among diverse populations, this project investigated the psychometric comparability between the PAI and the PAI-SF in a neuropsychiatric population. Based on previous literature, it was hypothesized that the PAI-SF would produce results congruent with those of the PAI in this sample.
Participants and Methods:
For this study, participant files (N=214) were collected retrospectively from short- and long-term residential psychiatric and substance use treatment facilities in Minnesota for patients with neurological and cognitive concerns referred for neuropsychological evaluation. The PAI-SF was scored using the first 160 items from each patient's long-form PAI protocol. To determine the psychometric comparability of the long and short forms, paired-samples t-tests, intraclass correlations, and percent agreement in clinical classification between forms were analyzed.
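As a rough sketch of the comparability analyses just described, the snippet below shows, for a single subscale, how an intraclass correlation, a paired-samples t-test, and percent agreement in clinical classification (T-score of 70 or above) might be computed. The data are simulated stand-ins, and the ICC variant shown (ICC(3,1), consistency) is an assumption, since the abstract does not specify one.

```python
import numpy as np
from scipy import stats

def icc_consistency(x, y):
    """ICC(3,1): two-way mixed, single measures, consistency, for two forms
    scored on the same participants (illustrative implementation)."""
    data = np.column_stack([x, y]).astype(float)
    n, k = data.shape
    grand = data.mean()
    ss_rows = k * np.sum((data.mean(axis=1) - grand) ** 2)   # between-subjects
    ss_cols = n * np.sum((data.mean(axis=0) - grand) ** 2)   # between-forms
    ss_total = np.sum((data - grand) ** 2)
    ms_rows = ss_rows / (n - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

def compare_forms(t_full, t_short, cutoff=70):
    """Compare full-form and short-form T-scores for one subscale."""
    t_full, t_short = np.asarray(t_full, float), np.asarray(t_short, float)
    icc = icc_consistency(t_full, t_short)
    t_stat, p_val = stats.ttest_rel(t_full, t_short)          # paired-samples t-test
    agreement = np.mean((t_full >= cutoff) == (t_short >= cutoff))
    return {"ICC": icc, "t": t_stat, "p": p_val, "pct_agreement": 100 * agreement}

# Illustrative use with simulated T-scores for one subscale (N = 214)
rng = np.random.default_rng(1)
full = rng.normal(60, 12, 214)
short = full + rng.normal(0, 4, 214)   # short form tracks the full form with noise
print(compare_forms(full, short))
```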
Results:
Intraclass correlations between corresponding PAI and PAI-SF subscales ranged from .87 to .98, demonstrating good to excellent reliability between forms. Symptoms are considered clinically elevated when they exceed the clinical significance threshold for a subscale (typically a T-score of 70 or above). Agreement between the PAI and PAI-SF subscales in the classification of clinically elevated scores ranged from 86% to 100%. When the forms did not agree, the PAI-SF was more likely than the PAI to be clinically elevated. Subscale means were compared between forms using paired-samples t-tests with a Bonferroni correction. Results revealed significant differences between the PAI and PAI-SF on one validity scale (Negative Impression Management), three clinical scales (Anxiety; Depression; Antisocial Features), and one treatment scale (Treatment Rejection).
Conclusions:
Results demonstrated that the PAI and PAI-SF have high reliability between forms in a neuropsychiatric population. Although mean scores differed on a small number of subscales between the PAI and PAI-SF, these differences did not appear large enough to shift clinical classifications, as the two forms performed similarly in their identification of clinically elevated scales. Findings align with previous literature and suggest that the PAI-SF may perform adequately in a neuropsychiatric population when brevity or participant burden is a concern. However, caution is warranted when making clinical decisions with the PAI-SF, as more research is needed.
Childhood adversities (CAs) predict heightened risks of posttraumatic stress disorder (PTSD) and major depressive episode (MDE) among people exposed to adult traumatic events. Identifying which CAs put individuals at greatest risk for these adverse posttraumatic neuropsychiatric sequelae (APNS) is important for targeting prevention interventions.
Methods
Data came from n = 999 patients ages 18–75 presenting to 29 U.S. emergency departments after a motor vehicle collision (MVC) and followed for 3 months, the amount of time traditionally used to define chronic PTSD, in the Advancing Understanding of Recovery After Trauma (AURORA) study. Six CA types were self-reported at baseline: physical abuse, sexual abuse, emotional abuse, physical neglect, emotional neglect, and bullying. Both dichotomous measures of ever experiencing each CA type and numeric measures of exposure frequency were included in the analysis. Risk ratios (RRs) of these CA measures, as well as complex interactions among these measures, were examined as predictors of APNS 3 months post-MVC. APNS was defined as meeting self-reported criteria for PTSD (based on the PTSD Checklist for DSM-5) and/or MDE (based on the PROMIS Depression Short-Form 8b). We controlled for pre-MVC lifetime histories of PTSD and MDE. We also examined mediating effects through peritraumatic symptoms assessed in the emergency department and PTSD and MDE assessed in 2-week and 8-week follow-up surveys. Analyses were carried out with robust Poisson regression models.
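The risk ratios reported below come from robust Poisson regression. Purely as an illustrative sketch of that approach (simulated stand-in data; the variable names are ours, not AURORA's), a modified Poisson model for a binary outcome with robust standard errors can be fit as follows, with exponentiated coefficients interpreted as RRs.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Simulated stand-in data: binary 3-month APNS outcome and CA exposure frequencies
rng = np.random.default_rng(2)
n = 999
df = pd.DataFrame({
    "apns": rng.integers(0, 2, n),                 # 1 = PTSD and/or MDE at 3 months
    "emotional_abuse_freq": rng.poisson(1.0, n),
    "bullying_freq": rng.poisson(0.8, n),
    "prior_ptsd": rng.integers(0, 2, n),
    "prior_mde": rng.integers(0, 2, n),
})

# Modified Poisson regression: Poisson GLM on a binary outcome with robust
# (sandwich) standard errors; exponentiated coefficients are risk ratios.
model = smf.glm(
    "apns ~ emotional_abuse_freq + bullying_freq + prior_ptsd + prior_mde",
    data=df,
    family=sm.families.Poisson(),
).fit(cov_type="HC1")

rr = np.exp(model.params)
ci = np.exp(model.conf_int())
print(pd.concat([rr.rename("RR"), ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```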
Results
Most participants (90.9%) reported having experienced at least one CA, at least rarely. Ever experiencing each CA other than emotional neglect was univariably associated with 3-month APNS (RRs = 1.31–1.60). Each CA frequency measure was also univariably associated with 3-month APNS (RRs = 1.65–2.45). In multivariable models, joint associations of CAs with 3-month APNS were additive, with frequency of emotional abuse (RR = 2.03; 95% CI = 1.43–2.87) and bullying (RR = 1.44; 95% CI = 0.99–2.10) being the strongest predictors. Control variable analyses found that these associations were largely explained by pre-MVC histories of PTSD and MDE.
Conclusions
Although individuals who experience frequent emotional abuse and bullying in childhood have a heightened risk of experiencing APNS after an adult MVC, these associations are largely mediated by prior histories of PTSD and MDE.
This study sought to identify coronavirus disease 2019 (COVID-19) risk communication materials distributed in Jamaica to mitigate the effects of the disease outbreak. It also sought to explore the effects of health risk communication on vulnerable groups in the context of the pandemic.
Methods:
A qualitative study was conducted, including a content analysis of health risk communications and in-depth interviews with 35 purposively selected participants: elderly persons, persons with physical disabilities, persons with mental health disorders, representatives of government agencies and of advocacy and service groups, and caregivers of vulnerable persons. Axial coding was applied to the interview data, and all data were analyzed using the constant comparison technique.
Results:
Twelve of the 141 COVID-19 risk communication messages directly targeted the vulnerable. All participants were aware of the relevant risk communication and largely complied. Barriers to messaging awareness and compliance included inappropriate message medium for the deaf and blind, rural location, lack of Internet service or digital devices, limited technology skills, and limited connection to agencies that serve the vulnerable.
Conclusion:
Vulnerable persons are at increased risk in times of crisis. Targeted information was not sufficiently accessible to ensure universal access to health information and support for vulnerable persons, regardless of location or type of vulnerability.
Background: As carbapenem-resistant Enterobacteriaceae (CRE) prevalence increases in the United States, the risk of cocolonization with multiple CRE may also be increasing, with unknown clinical and epidemiological significance. In this study, we aimed to describe the epidemiologic and microbiologic characteristics of inpatients cocolonized with multiple CRE. Methods: We conducted a secondary analysis of a large, multicenter prospective cohort study evaluating risk factors for CRE transmission to healthcare personnel gowns and gloves. Patients were identified between January 2016 and June 2019 from 4 states. Patients enrolled in the study had a clinical or surveillance culture positive for CRE within 7 days of enrollment. We collected and cultured samples from the following sites from each CRE-colonized patient: stool, perianal area, and skin. A modified carbapenem inactivation method (mCIM) was used to detect the presence or absence of carbapenemase(s). EDTA-modified CIM (eCIM) was used to differentiate between serine and metal-dependent carbapenemases. Results: Of the 313 CRE-colonized patients enrolled in the study, 28 (8.9%) were cocolonized with at least 2 different CRE. Additionally, 3 patients (1.0%) were cocolonized with >2 different CRE. Of the 28 patients, 19 (67.6%) were enrolled with positive clinical cultures. Table 1 summarizes the demographic and clinical characteristics of these patients. The most frequently used antibiotic prior to positive culture was vancomycin (n = 33, 18.3%). Among the 62 isolates from 59 samples from the 28 cocolonized patients, the most common CRE species were Klebsiella pneumoniae (n = 18, 29.0%), Escherichia coli (n = 10, 16.1%), and Enterobacter cloacae (n = 9, 14.5%). Of the 62 isolates, 38 (61.3%) were mCIM positive and 8 (12.9%) were eCIM positive. Of the 38 mCIM-positive isolates, 33 (86.8%) were KPC positive, 4 (10.5%) were NDM positive, and 1 (2.6%) was negative for both KPC and NDM. Also, 2 E. coli, 1 K. pneumoniae, and 1 E. cloacae were NDM-producing CRE. Conclusion: Cocolonization with multiple CRE occurs frequently in the acute-care setting. Characterizing patients with CRE cocolonization may be important for informing infection control practices and interventions to limit the spread of these organisms, but further study is needed.
Background: Carbapenem-resistant Enterobacteriaceae (CRE) are a serious threat to public health due to high associated morbidity and mortality. Healthcare personnel (HCP) gloves and gowns are frequently contaminated with antibiotic-resistant bacteria, including CRE. We aimed to identify patients more likely to transmit CRE to HCP gloves or gowns, and HCP types and interactions more likely to lead to glove or gown contamination. Methods: Between January 2016 and August 2018, patients with a clinical or surveillance culture positive for CRE in the preceding 7 days were enrolled at 5 hospitals in California, Maryland, New York, and Pennsylvania. Ten HCP–patient interactions were observed for each patient and were recorded by research staff. Following patient care, but prior to doffing, the gloves and gown of each HCP were sampled for the presence of CRE. Results: We enrolled 313 CRE-colonized patients and observed 3,070 HCP interactions. CRE was transmitted to HCP gloves in 242 of 3,070 observations (7.9%) and to gowns in 132 of 3,070 observations (4.3%). Transmission to either gloves or gown occurred in 308 of 3,070 interactions observed (10%). The most frequently identified organism was Klebsiella pneumoniae (n = 171, 53.2%), followed by Enterobacter cloacae (n = 36, 11.2%) and Escherichia coli (n = 33, 10.3%). Patients in the intensive care unit (n = 177, 56.5%) were more likely to transmit CRE to HCP gloves or gown (OR, 1.65; 95% CI, 1.03–2.64) than patients not in an ICU, adjusted for HCP type. The odds of CRE transmission increased with the number of different items touched near the patient (OR, 1.32; 95% CI, 1.21–1.44) and with the number of different items touched in the environment (OR, 1.13; 95% CI, 1.06–1.21). Respiratory therapists had the highest rates of transmission to gloves and gown (OR, 3.79; 95% CI, 1.61–8.94), followed by physical therapists and occupational therapists (OR, 2.82; 95% CI, 1.01–8.32), when compared to HCP in the “other” category. Manipulating the rectal tube (OR, 3.03; 95% CI, 1.53–6.04), providing wound care (OR, 2.81; 95% CI, 1.73–4.59), and touching the endotracheal tube (OR, 2.79; 95% CI, 1.86–4.19) were the interactions most strongly associated with CRE transmission, compared with not touching these items and adjusted for HCP type. Conclusions: Transmission of CRE to HCP gloves and gowns occurs frequently. We identified interactions and HCP types that were particularly high risk for transmission. Infection control programs may wish to target infection prevention resources and education toward these high-risk professions and interactions.
Funding: This work was supported by the CDC Prevention Epicenter Program (U43CK000450-01) and the NIH National Institute of Allergy and Infectious Diseases (R01 AI121146-01).
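The adjusted odds ratios reported above could, conceptually, be obtained from a logistic regression of glove/gown contamination on patient and HCP characteristics. The sketch below is only an illustration with simulated data and hypothetical variable names; the study's actual analysis may also have accounted for clustering of interactions within patients (e.g., via GEE), which this sketch does not.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data: one row per observed HCP-patient interaction
rng = np.random.default_rng(4)
n = 3070
df = pd.DataFrame({
    "contaminated": rng.integers(0, 2, n),          # CRE on gloves or gown (0/1)
    "icu": rng.integers(0, 2, n),                   # patient in an ICU (0/1)
    "items_touched": rng.poisson(3, n),             # items touched near the patient
    "hcp_type": rng.choice(["nurse", "rt", "pt_ot", "other"], n),
})

# Logistic regression: exponentiated coefficients are adjusted odds ratios,
# with "other" HCP as the reference category.
model = smf.logit(
    "contaminated ~ icu + items_touched + C(hcp_type, Treatment('other'))",
    data=df,
).fit(disp=0)

odds_ratios = np.exp(model.params)
or_ci = np.exp(model.conf_int())
print(pd.concat([odds_ratios.rename("OR"),
                 or_ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```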
Background: Estimates of contamination of healthcare personnel (HCP) gloves and gowns with methicillin-resistant Staphylococcus aureus (MRSA) following interactions with colonized or infected patients range from 17% to 20%. Most studies were conducted in the intensive care unit (ICU) setting where patients had a recent positive clinical culture. The aim of this study was to determine the rate of MRSA transmission to HCP gloves and gowns in non-ICU acute-care hospital units and to identify associated risk factors. Methods: Patients on contact precautions with a history of MRSA colonization or infection admitted to non-ICU settings were randomly selected from electronic health records. We observed patient care activities and cultured the gloves and gowns of 10 HCP interactions per patient prior to doffing. Cultures from patients' anterior nares, chest, antecubital fossa, and perianal area were collected to quantify bacterial bioburden. Bacterial counts were log transformed. Results: We observed 55 patients (Fig. 1) and 517 HCP–patient interactions. Of the HCP–patient interactions, 16 (3.1%) led to MRSA contamination of HCP gloves, 18 (3.5%) led to contamination of HCP gowns, and 28 (5.4%) led to contamination of either gloves or gowns. In addition, 5 (12.8%) patients had a positive clinical or surveillance culture for MRSA in the prior 7 days. Nurses, physicians, and technicians were grouped into a “direct patient care” group, and the remaining HCP were included in a “no direct patient care” group. Of 404 interactions involving providers in the “direct patient care” group, 26 (6.4%) showed transmission of MRSA to gloves or gown, compared with 2 of 113 (1.8%) interactions involving providers in the “no direct patient care” group (P = .05) (Fig. 2). The median MRSA bioburden was 0 log10 CFU/mL in the nares (range, 0–3.6), perianal region (range, 0–3.5), arm skin (range, 0–0.3), and chest skin (range, 0–6.2). Detectable bioburden on patients was negatively correlated with the time since the patient was placed on contact precautions (rs = −0.06; P < .001). Of 97 observations with detectable bacterial bioburden at any site, 9 (9.3%) resulted in transmission of MRSA to HCP, compared with 11 (3.6%) of 310 observations with no detectable bioburden at any site (P = .03). Conclusions: Transmission of MRSA to gloves or gowns of HCP caring for patients on contact precautions for MRSA in non-ICU settings was lower than in the ICU setting. More evidence is needed to help guide the optimal use of contact precautions for the right patient, in the right setting, for the right type of encounter.
Aberrant activity of the subcallosal cingulate (SCC) is a common theme across pharmacologic treatment efficacy prediction studies. The functioning of the SCC in psychotherapeutic interventions is relatively understudied, as are functional differences among SCC subdivisions. We conducted resting-state functional connectivity (rsFC) analyses on resting-state functional magnetic resonance imaging (fMRI) data, collected before and after a course of cognitive behavioral therapy (CBT) in patients with major depressive disorder (MDD), using seeds from three SCC subdivisions.
Methods.
Resting-state data were collected from unmedicated patients with current MDD (Hamilton Depression Rating Scale-17 > 16) before and after 14 sessions of CBT monotherapy. Treatment outcome was assessed using the Beck Depression Inventory (BDI). Rostral anterior cingulate (rACC), anterior subcallosal cingulate (aSCC), and Brodmann’s area 25 (BA25) masks were used as seeds in connectivity analyses that assessed baseline rsFC and symptom severity, changes in connectivity related to symptom improvement after CBT, and prediction of treatment outcomes using whole-brain baseline connectivity.
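The published analyses were presumably run with standard neuroimaging pipelines; purely to illustrate what a seed-based rsFC map computes, here is a minimal numpy-only sketch on synthetic data with an arbitrary "seed" region (not the authors' pipeline): correlate the mean seed time series with every voxel's time series and Fisher z-transform the result.

```python
import numpy as np

def seed_connectivity(bold, seed_mask):
    """Seed-to-voxel resting-state functional connectivity (toy sketch).

    bold: 4D array (x, y, z, time); seed_mask: 3D boolean array marking the seed.
    Returns a 3D map of Fisher z-transformed correlations between the mean seed
    time series and every voxel's time series.
    """
    seed_ts = bold[seed_mask].mean(axis=0)                 # (time,)
    vox = bold.reshape(-1, bold.shape[-1])                 # (n_voxels, time)
    vox_c = vox - vox.mean(axis=1, keepdims=True)
    seed_c = seed_ts - seed_ts.mean()
    num = vox_c @ seed_c
    denom = np.linalg.norm(vox_c, axis=1) * np.linalg.norm(seed_c)
    r = np.divide(num, denom, out=np.zeros_like(num), where=denom > 0)
    z = np.arctanh(np.clip(r, -0.999999, 0.999999))        # Fisher z-transform
    return z.reshape(bold.shape[:-1])

# Toy example: random "BOLD" data and an arbitrary cubic seed region
rng = np.random.default_rng(3)
bold = rng.normal(size=(10, 10, 10, 120))
seed = np.zeros((10, 10, 10), dtype=bool)
seed[4:6, 4:6, 4:6] = True
print(seed_connectivity(bold, seed).shape)   # (10, 10, 10)
```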
Results.
Pretreatment BDI negatively correlated with pretreatment rACC ~ dorsolateral prefrontal cortex and aSCC ~ lateral prefrontal cortex rsFC. In a region-of-interest longitudinal analysis, rsFC between these regions increased post-treatment (p < 0.05, FDR corrected). In whole-brain analyses, BA25 ~ paracentral lobule and rACC ~ paracentral lobule connectivities decreased post-treatment. Whole-brain baseline rsFC with SCC did not predict clinical improvement.
Conclusions.
rsFC features of rACC and aSCC, but not BA25, correlated inversely with baseline depression severity, and increased following CBT. Subdivisions of SCC involved in top-down emotion regulation may be more involved in cognitive interventions, while BA25 may be more informative for interventions targeting bottom-up processing. Results emphasize the importance of subdividing the SCC in connectivity analyses.
Brain health diplomacy aims to influence the global policy environment for brain health (i.e. dementia, depression, and other mind/brain disorders) and bridges the disciplines of global brain health, international affairs, management, law, and economics. Determinants of brain health include educational attainment, diet, access to health care, physical activity, social support, and environmental exposures, as well as chronic brain disorders and treatment. Global challenges associated with these determinants include large-scale conflicts and consequent mass migration, chemical contaminants, air quality, socioeconomic status, climate change, and global population aging. Given the rapidly advancing technological innovations impacting brain health, it is paramount to optimize the benefits and mitigate the drawbacks of such technologies.
Objective:
We propose a working model of Brain health INnovation Diplomacy (BIND).
Methods:
We prepared a selective review using literature searches of studies pertaining to brain health technological innovation and diplomacy.
Results:
BIND aims to improve global brain health outcomes by leveraging technological innovation, entrepreneurship, and innovation diplomacy. It acknowledges the key role that technology, entrepreneurship, and digitization play, and will increasingly play, in the future of brain health for individuals and societies alike. It strengthens the positive role of novel solutions while recognizing and working to manage both the real and the potential risks of digital platforms. It also recognizes the political, ethical, cultural, and economic influences that brain health technological innovation and entrepreneurship can have.
Conclusions:
The BIND framework provides a systematic model for using technology to optimize brain health.
The transmission rate of methicillin-resistant Staphylococcus aureus (MRSA) to gloves or gowns of healthcare personnel (HCP) caring for MRSA patients in a non–intensive care unit setting was 5.4%. Contamination rates were higher among HCP performing direct patient care and when patients had detectable MRSA on their body. These findings may inform risk-based contact precautions.
We studied the association between chlorhexidine gluconate (CHG) concentration on skin and resistant bacterial bioburden. CHG was almost always detected on the skin, and detection of methicillin-resistant Staphylococcus aureus, carbapenem-resistant Enterobacteriaceae, and vancomycin-resistant Enterococcus on skin sites was infrequent. However, we found no correlation between CHG concentration and bacterial bioburden.
Applied psychologists commonly use personality tests in employee selection systems because of their advantages regarding incremental criterion-related validity and less adverse impact relative to cognitive ability tests. Although personality tests have seen limited legal challenges in the past, we posit that the use of personality tests might see increased challenges under the Americans with Disabilities Act (ADA) and the ADA Amendments Act (ADAAA) due to emerging evidence that normative personality and personality disorders belong to common continua. This article aims to begin a discussion and offer initial insight regarding the possible implications of this research for personality testing under the ADA. We review past case law, scholarship in employment law, Equal Employment Opportunity Commission (EEOC) guidance regarding “medical examinations,” and recent literature from various psychology disciplines—including clinical psychology, neuropsychology, and applied personality psychology—regarding the relationship between normative personality and personality disorders. More importantly, we review suggestions proposing that the five-factor model (FFM) be used to diagnose personality disorders (PDs) and recent changes in the Diagnostic and Statistical Manual of Mental Disorders (DSM). Our review suggests that as scientific understanding of personality progresses, practitioners will need to exercise ever more caution when choosing personality measures for use in selection systems. We conclude with six recommendations for applied psychologists when developing or choosing personality measures.
Objective: Few studies have investigated the assessment and functional impact of egocentric and allocentric neglect among stroke patients. This pilot study aimed to determine (1) whether allocentric and egocentric neglect could be dissociated among a sample of stroke patients using eye tracking; (2) the specific patterns of attention associated with each subtype; and (3) the nature of the relationship between neglect subtype and functional outcome. Method: Twenty acute stroke patients were administered neuropsychological assessment batteries, a pencil-and-paper Apples Test to measure neglect subtype, and an adaptation of the Apples Test with an eye-tracking measure. To test clinical discriminability, twenty age- and education-matched control participants were administered the eye-tracking measure of neglect. Results: The eye-tracking measure identified a greater number of individuals as having egocentric and/or allocentric neglect than the pencil-and-paper Apples Test. Classification of neglect subtype based on eye-tracking performance was a significant predictor of functional outcome beyond that accounted for by neuropsychological test performance and Apples Test neglect classification. Preliminary evidence suggests that patients with no neglect symptoms had superior functional outcomes compared with patients with neglect. Patients with combined egocentric and allocentric neglect had poorer functional outcomes than those with either subtype alone. Functional outcomes of patients with either allocentric or egocentric neglect did not differ significantly. Applications of our findings to improve neglect detection are discussed. Conclusion: Results highlight the potential clinical utility of eye tracking for the assessment and identification of neglect subtype among stroke patients to predict functional outcomes. (JINS, 2019, 25, 479–489)
Cognitive behavioral therapy (CBT) is an effective treatment for many patients suffering from major depressive disorder (MDD), but predictors of treatment outcome are lacking, and little is known about its neural mechanisms. We recently identified longitudinal changes in neural correlates of conscious emotion regulation that scaled with clinical responses to CBT for MDD, using a negative autobiographical memory-based task.
Methods
We now examine the neural correlates of emotional reactivity and emotion regulation during viewing of emotionally salient images as predictors of treatment outcome with CBT for MDD, and the relationship between longitudinal change in functional magnetic resonance imaging (fMRI) responses and clinical outcomes. Thirty-two participants with current MDD underwent baseline MRI scanning followed by 14 sessions of CBT. The fMRI task measured emotional reactivity and emotion regulation on separate trials using standardized images from the International Affective Picture System. Twenty-one participants completed post-treatment scanning. Last observation carried forward was used to estimate clinical outcome for non-completers.
Results
Pre-treatment blood oxygen level-dependent (BOLD) signal during emotional reactivity within the hippocampus, including CA1, predicted worse treatment outcome. In contrast, better treatment outcome was associated with increased down-regulation of BOLD activity during emotion regulation from time 1 to time 2 in the precuneus, occipital cortex, and middle frontal gyrus.
Conclusions
CBT may modulate the neural circuitry of emotion regulation. The neural correlates of emotional reactivity may be more strongly predictive of CBT outcome. The finding that treatment outcome was predicted by BOLD signal in CA1 may suggest overgeneralized memory as a negative prognostic factor in CBT outcome.