The recommended first-line treatment for insomnia is cognitive behavioral therapy for insomnia (CBTi), but access is limited. Telehealth- and internet-delivered CBTi are alternative ways to increase access. To date, these intervention modalities have never been compared within a single study. Further, few studies have examined (a) predictors of response to the different modalities, (b) whether successfully treating insomnia can result in improvement of health-related biomarkers, and (c) mechanisms of change in CBTi. This protocol was designed to compare the three CBTi modalities to each other and to a waitlist control for adults aged 50–65 years (N = 100). Participants are randomly assigned to one of four study arms: in-person- (n = 30), telehealth- (n = 30), or internet-delivered (n = 30) CBTi, or a 12-week waitlist control (n = 10). Outcomes include self-reported insomnia symptom severity, polysomnography, circadian rhythms of activity and core body temperature, blood- and sweat-based biomarkers, cognitive functioning, and magnetic resonance imaging.
From early on, infants show a preference for infant-directed speech (IDS) over adult-directed speech (ADS), and exposure to IDS has been correlated with language outcome measures such as vocabulary. The present multi-laboratory study explores this issue by investigating whether there is a link between early preference for IDS and later vocabulary size. Infants’ preference for IDS was tested as part of the ManyBabies 1 project, and follow-up CDI data were collected from a subsample of this dataset at 18 and 24 months. A total of 341 (18 months) and 327 (24 months) infants were tested across 21 laboratories. In neither the preregistered analyses with North American and UK English samples nor the exploratory analyses with a larger sample did we find evidence for a relation between IDS preference and later vocabulary. We discuss implications of this finding in light of recent work suggesting that IDS preference measured in the laboratory has low test-retest reliability.
Animals under human care are exposed to a potentially large range of both familiar and unfamiliar humans. Human-animal interactions vary across settings and individuals, with the nature of the interaction being affected by a suite of different intrinsic and extrinsic factors. These interactions can be described as positive, negative or neutral. Across some industries, there has been a move towards the development of technologies to support or replace human interactions with animals. Whilst this has many benefits, there can also be challenges associated with increased technology use. A day-long Animal Welfare Research Network workshop was hosted at Harper Adams University, UK, with the aim of bringing together stakeholders and researchers (n = 38) from the companion, farm and zoo animal fields to discuss the benefits, challenges and limitations of human-animal and machine-animal interactions for animals under human care, and to create a list of future research priorities. The workshop consisted of four talks from experts within these areas, followed by break-out room discussions. This work is the outcome of that workshop. The key recommendations are that approaches to advancing the scientific discipline of machine-animal interactions in animals under human care should focus on: (1) interdisciplinary collaboration; (2) development of validated methods; (3) incorporation of an animal-centred perspective; (4) a focus on promotion of positive animal welfare states (not just avoidance of negative states); and (5) an exploration of ways that machines can reduce animals' exposure to negative human-animal interactions and increase positive experiences for animals.
Reported rates of thrombus formation in the Fontan circulation have been highly variable, ranging from 17 to 33%. Initially, thrombus detection was done mainly with echocardiography. Delayed-enhancement cardiac MRI is emerging as a more effective imaging technique for thrombus identification. This study aims to determine the prevalence of occult cardiac thrombosis in patients undergoing clinically indicated cardiac MRI.
Methods:
We performed a retrospective chart review of children and adults in the Duke University Hospital Fontan registry who underwent delayed-enhancement cardiac MRI. Individuals were excluded if they never received a delayed-enhancement cardiac MRI or had insufficient data. Demographic characteristics, native heart anatomy, cardiac MRI measurements, and thromboembolic events were collected for all patients.
Results:
In total, 119 unique individuals met inclusion criteria, contributing 171 scans. The median age at Fontan procedure was 3 (interquartile range 1, 4) years. The majority of patients had a dominant systemic right ventricle. Cardiac function was relatively unchanged from the first cardiac MRI to the third cardiac MRI. While 36.4% had a thrombotic event by history, only 0.5% (1 patient) had an intracardiac thrombus detected by delayed-enhancement cardiac MRI.
Conclusions:
Despite previous echocardiographic reports of high prevalence of occult thrombosis in patients with Fontan circulation, we found very low prevalence using delayed-enhancement cardiac MRI. As more individuals are reaching adulthood after requiring early Fontan procedures in childhood, further work is needed to develop thrombus-screening protocols as a part of anticoagulation management.
Children with CHD or born very preterm are at risk for brain dysmaturation and poor neurodevelopmental outcomes. Yet, studies have primarily investigated neurodevelopmental outcomes of these groups separately.
Objective:
To compare neurodevelopmental outcomes and parent behaviour ratings of children born at term with CHD with those of children born very preterm.
Methods:
A clinical research sample of 181 children (CHD [n = 81]; very preterm [≤32 weeks; n = 100]) was assessed at 18 months.
Results:
Children with CHD and born very preterm did not differ on Bayley-III cognitive, language, or motor composite scores, or on expressive or receptive language, or on fine motor scaled scores. Children with CHD had lower gross motor scaled scores compared to children born very preterm (p = 0.047). More children with CHD had impaired scores (standard score < 70) on language composite (17%), expressive language (16%), and gross motor (14%) indices compared to children born very preterm (6%; 7%; 3%; ps < 0.05). No group differences were found on behaviours rated by parents on the Child Behaviour Checklist (1.5–5 years) or the proportion of children with scores above the clinical cutoff. English as a first language was associated with higher cognitive (p = 0.004) and language composite scores (p < 0.001). Lower median household income and English as a second language were associated with higher total behaviour problems (ps < 0.05).
Conclusions:
Children with CHD were more likely to display language and motor impairment compared to children born very preterm at 18 months. Outcomes were associated with language spoken in the home and household income.
Stress is well known to increase the severity of somatization and insomnia. A recent major stressor that could have influenced the severity of these presentations was the worldwide COVID-19 pandemic. Somatization is the physical expression of stress and emotional distress that can manifest across various bodily domains and can be comorbid with insomnia. Headaches represent some of the most common complaints associated with brain injuries and neurological disorders but are common in somatization disorders as well. In a large survey study, we examined whether exercise was associated with the severity of somatization and headaches. We hypothesized that both healthy individuals and those with insomnia who exercised during the pandemic would report less severe somatic symptoms and headaches than those who did not.
Participants and Methods:
A large survey was sent to 4,073 individuals to measure their experiences in numerous domains during the COVID-19 pandemic. The survey included a short symptom questionnaire to measure somatization and the Insomnia Severity Index to measure insomnia. These questionnaires were administered along with a yes/no question on whether participants exercised regularly during that period. A univariate ANOVA was performed to determine whether exercise during the pandemic was associated with reduced somatic symptom and headache severity, and whether any such effect was greater in those with insomnia.
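For illustration only, a minimal sketch of the factorial ANOVA described above is given below. This is not the authors' analysis code; the file name and column names (somatic_total, headache_severity, insomnia, exercise) are hypothetical.

```python
# Minimal sketch of the insomnia x exercise ANOVAs described above; names are hypothetical.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.read_csv("covid_survey.csv")  # hypothetical export: one row per respondent

# Somatization: insomnia status x regular exercise (yes/no), including their interaction
somatic_model = ols("somatic_total ~ C(insomnia) * C(exercise)", data=df).fit()
print(sm.stats.anova_lm(somatic_model, typ=2))

# Headache severity: same two factors and their interaction
headache_model = ols("headache_severity ~ C(insomnia) * C(exercise)", data=df).fit()
print(sm.stats.anova_lm(headache_model, typ=2))
```

The anova_lm table reports the two main effects and the interaction term, which correspond to the F-tests summarized in the Results.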
Results:
The effects of insomnia and exercise on total somatic symptoms were significant, F(1, 3445)=650.5, p<0.001 and F(1, 3445)=26.1, p<0.001, respectively. For reported headache severity, there were significant effects of exercise, F(1, 4073)=14.5, p<0.001, and insomnia, F(1, 4073)=160.5, p<0.001. Thus, those who exercised reported less severe somatization and headaches than those who did not, and those with insomnia reported more severe somatization and headaches than healthy individuals. However, the interaction between exercise and insomnia was not significant for overall somatization severity, F(1, 3445)=3.4, p=0.066, or for reported headache severity, F(1, 4073)=0.81, p=0.370. Although the interaction was not significant, the benefit of exercise was slightly greater in healthy individuals than in those with insomnia.
Conclusions:
Those with insomnia reported more severe headaches and overall somatic symptoms than those without insomnia, regardless of whether they exercised. Exercise made a difference in the reported severity of headaches and somatization in both groups; however, its benefit was greater in individuals who do not suffer from insomnia. Thus, exercise appears beneficial both for the general population and for those suffering from insomnia, as it can potentially reduce the severity of somatization and headaches. Of course, this research was cross-sectional and correlational, so the directionality of the effects cannot be inferred. Future research using experimental methods would help determine the duration and type of exercise that may optimize its potential benefits on headaches and somatic symptoms.
Resiliency has been shown to attenuate and even protect against cognitive impairment from mental and physical stressors. Recently, it has been demonstrated that individuals who score high in psychological resilience tend to have less impairment following an mTBI. The COVID-19 pandemic proved to be an uncertain time for many. Periods of isolation, unemployment, and, of course, sickness meant more time at home. The partial or complete breakdown of individuals' day-to-day routines, paired with the stress of the pandemic, has reinforced the need for psychological resilience. This analysis investigates the relationship between self-reported routine adherence and psychological resilience. We hypothesized that individuals who maintained a structured daily routine during the pandemic would have higher levels of psychological resilience, enabling them to better handle periods of extreme stress.
Participants and Methods:
8,963 English-speaking adults (18–92 years old; 59.5% female) from across the U.S. completed an online, monthly cross-sectional battery of questions (∼1,000 participants per month) between June 2020 and April 2021 that included the Connor-Davidson Resilience Scale (CD-RISC) and self-reported sleep and routine ratings. We measured the level of an individual's routine by summing the self-reported scores for waking at the same time each day and maintaining a routine throughout the day. Both questions were scored 0–4 (Likert-style), for a score range of 0 to 8; higher scores indicated greater adherence to a daily structure. Weeknight sleep (Sun–Thurs) was a self-reported average of the hours of sleep obtained over the past 4 weeks. A two-way ANCOVA was used to analyze the effects of routine on psychological resilience scores while controlling for average sleep duration.
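A minimal sketch of the covariate-adjusted analysis and the follow-up group comparison is shown below, assuming hypothetical column names (cd_risc_total, routine_score, weeknight_sleep); it is an illustration, not the authors' code.

```python
# Minimal sketch of the ANCOVA and follow-up t-test described above; names are hypothetical.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from scipy import stats

df = pd.read_csv("resilience_survey.csv")  # hypothetical monthly survey export

# Routine adherence (0-8) as the factor, CD-RISC total as the outcome,
# average weeknight sleep (hours) as the covariate.
ancova = ols("cd_risc_total ~ C(routine_score) + weeknight_sleep", data=df).fit()
print(sm.stats.anova_lm(ancova, typ=2))

# Follow-up comparison of above- vs below-average routine adherence
cutoff = df["routine_score"].mean()
above = df.loc[df["routine_score"] > cutoff, "cd_risc_total"]
below = df.loc[df["routine_score"] <= cutoff, "cd_risc_total"]
print(stats.ttest_ind(above, below))
```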
Results:
A significant main effect of routine on psychological resilience was found, F(8, 8953) = 227, p < .00001, after controlling for average reported weeknight sleep. An independent t-test was performed to compare individuals who fell above versus below the average routine-adherence score (M = 5.1). Individuals above average in adherence (M = 71.1, SD = 15.5) had significantly higher CD-RISC scores than individuals below average (M = 59.2, SD = 16.7); t(9166) = 35.1, p < 0.001.
Conclusions:
Individuals who maintained a more structured day throughout the pandemic were more likely to score higher on psychological resilience assessments than those who did not. Chronic stress is known to contribute to the development and exacerbation of many common psychiatric conditions like anxiety and depression. These results suggest that having a regular routine may have positive effects on an individual’s ability to bounce back from stressful cognitive and psychological events. This relationship should be further investigated in clinical populations as a potential intervention or adjunctive treatment for common neuropsychiatric conditions.
Face-to-face administration is the “gold standard” for both research and clinical cognitive assessments. However, many factors may impede or prevent face-to-face assessments, including distance to the clinic and limited mobility, eyesight, or transportation. The COVID-19 pandemic further widened gaps in access to care and clinical research participation. Alternatives to face-to-face assessments may provide an opportunity to alleviate the burden caused by both the COVID-19 pandemic and longer-standing social inequities. The objectives of this study were to develop and assess the feasibility of telephone- and video-administered versions of the Uniform Data Set (UDS) v3 cognitive battery for use by NIH-funded Alzheimer’s Disease Research Centers (ADRCs) and other research programs.
Participants and Methods:
Ninety-three individuals (M age: 72.8 years; education: 15.6 years; 72% female; 84% White) enrolled in our ADRC were included. Their most recent adjudicated cognitive status was normal cognition (N = 44), MCI (N = 35), mild dementia (N = 11), or other (N = 3). They completed portions of the UDSv3 cognitive battery, plus the RAVLT, by either telephone or video format within approximately 6 months (M = 151 days) of their annual in-person visit, at which they completed the same cognitive assessments in person. Some measures were substituted (Oral Trails for TMT; Blind MoCA for MoCA) to allow for phone administration. Participants also answered questions about the pleasantness of, difficulty of, and preference for each administration mode. Cognitive testers provided ratings of the perceived validity of the assessment. Participants’ cognitive status was adjudicated by a group of cognitive experts blinded to the most recent in-person cognitive status.
Results:
When results from the video and phone modalities were combined, the remote assessment was rated as pleasant as the in-person assessment by 74% of participants, and 75% rated the difficulty of completing the remote cognitive assessment as the same as in-person testing. Overall perceived validity of the testing session, as determined by the cognitive assessors (video = 92%; phone = 87.5%), was good. There was generally good concordance between test scores obtained remotely and in person (r = .3–.8; p < .05), regardless of whether they were administered by phone or video, though individual test correlations differed slightly by mode. Substituted measures also generally correlated well, with the exception of TMT-A and OTMT-A (p > .05). Agreement between cognitive status adjudicated remotely and cognitive status based on in-person data was generally high (78%), with slightly better concordance for video/in-person (82%) than phone/in-person (76%).
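As an illustration of the concordance analyses reported above, a minimal sketch follows; the file, test labels, and column names are hypothetical and do not reflect the study's actual data pipeline.

```python
# Minimal sketch of remote vs. in-person concordance; file, test, and column names are hypothetical.
import pandas as pd
from scipy import stats

df = pd.read_csv("remote_vs_inperson.csv")  # one row per participant

# Test-level concordance: Pearson r between remote and in-person scores
for test in ["craft_story", "benson_copy", "digit_span", "ravlt_total"]:  # hypothetical labels
    r, p = stats.pearsonr(df[f"{test}_remote"], df[f"{test}_inperson"])
    print(f"{test}: r = {r:.2f}, p = {p:.3f}")

# Diagnosis-level concordance: percent agreement between remote and in-person adjudication
agreement = (df["status_remote"] == df["status_inperson"]).mean() * 100
print(f"Adjudication agreement: {agreement:.0f}%")
```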
Conclusions:
This pilot study provided support for the use of telephone- and video-administered cognitive assessments using the UDSv3 among individuals with normal cognitive function and some degree of cognitive impairment. Participants found the experience similarly pleasant and no more difficult than in-person assessment. Test scores obtained remotely correlated well with those obtained in person, with some variability across individual tests. Adjudication of cognitive status did not differ significantly whether it was based on data obtained remotely or in person. The study was limited by its small sample size, large test-retest window, and lack of randomization to test-modality order. Current efforts are underway to more fully validate this battery of tests for remote assessment. Funded by: P30 AG072947 & P30 AG049638-05S1.
Chronic insomnia is a highly prevalent disorder affecting approximately one-in-three Americans. Insomnia is associated with increased cognitive and brain arousal. Compared to healthy individuals, those with insomnia tend to show greater activation/connectivity within the default mode network (DMN) of the brain, consistent with the hyperarousal theory. We investigated whether it would be possible to suppress activation of the DMN to improve sleep using a type of repetitive transcranial magnetic stimulation (rTMS) known as continuous theta burst stimulation (cTBS).
Participants and Methods:
Participants (n=9, 6 female; age=25.4, SD=5.9 years) meeting criteria for insomnia/sleep disorder on standardized scales completed a counterbalanced, sham-controlled crossover design in which they served as their own controls across two separate nights of laboratory-monitored sleep in separate weeks. Each session included two resting-state functional magnetic resonance imaging (fMRI) scans separated by a brief rTMS session. Stimulation involved a 40-second cTBS train applied over an easily accessible cortical surface node of the DMN located at the left inferior parietal lobe. After scanning/stimulation, the participant was escorted to an isolated sleep laboratory bedroom, fitted with polysomnography (PSG) electrodes, and allowed an 8-hour sleep opportunity from 2300 to 0700. PSG was monitored continuously and scored for standard outcomes, including total sleep time (TST), percentage of time in various sleep stages, and number of arousals.
Results:
Consistent with our hypothesis, a single session of active cTBS produced a significant reduction of functional connectivity (p < .05, FDR corrected) within the DMN. In contrast, the sham condition produced no changes in functional connectivity from pre- to post-treatment. Furthermore, after controlling for age, we also found that the active treatment was associated with meaningful trends toward greater overnight improvements in sleep compared to the sham condition. First, the active cTBS condition was associated with significantly greater TST compared to sham (F(1,7)=14.19, p=.007, partial eta-squared=.67). Overall, individuals obtained 26.5 minutes more sleep on the nights that they received the active cTBS compared to the sham condition. Moreover, the active cTBS condition was associated with a significant increase in the percentage of time in rapid eye movement (REM%) sleep compared to the sham condition (F(1,7)=7.05, p=.033, partial eta-squared=.50), which remained significant after controlling for age. Overall, active treatment was associated with 6.76% more of total sleep time spent in REM compared to sham treatment. Finally, active cTBS was associated with fewer arousals from sleep (t(8) = -1.84, p = .051, d = .61), with an average of 15.1 fewer arousals throughout the night than sham.
Conclusions:
Overall, these findings suggest that this simple and brief cTBS approach can alter DMN brain functioning in the expected direction and was associated with trends toward improved objectively measured sleep, including increased TST and REM% and fewer arousals during the night following stimulation. These findings emerged after only a single 40-second treatment, and it remains to be seen whether multiple treatments over several days or weeks can sustain or even improve upon these outcomes.
While there exist numerous validated neuropsychological tests and batteries to measure cognitive and behavioral capacities, the vast majority of these are time-intensive and difficult to administer and score outside of the clinic. Moreover, many existing assessments may have limited ecological validity in some contexts (e.g., military operations). Therefore, we have been developing a novel approach to administering neuropsychological assessment using a virtual reality (VR) “game” that will collect simultaneously acquired multidimensional data that are synthesized by machine learning algorithms to identify neurocognitive strengths and weaknesses in a fraction of the time of typical assessment approaches. For our initial pilot project, we developed a preliminary VR task involving a brief, game-like military “shoot/no-shoot” scenario that collected data on hits, false alarms, discriminability, and response times under a context-dependent rule set. This prototype task will eventually be expanded to include a significantly more complex set of tasks with greater cognitive demands, sensor feeds, and response variables that could be modified to fit many other contexts. The objective of this project was to construct a rudimentary pilot version and demonstrate whether it could predict outcomes on standard neuropsychological assessments.
Participants and Methods:
To demonstrate proof of concept, we collected data from 20 healthy participants from the general population (11 male; age=24.8, SD=7.8) with high average intelligence (IQ = 112, SD=10.7). All participants completed the Wechsler Abbreviated Scale of Intelligence-II (WASI-II) and several neuropsychological tests, including the ImPACT, the Attention and Executive Function modules of the Neuropsychological Assessment Battery (NAB), and the VR task. Initially, we used a prior dataset from 359 participants (n=191 mild traumatic brain injury; n=120 healthy control; n=48 sleep deprived) as a training sample for machine learning models. Based on these outcomes, we applied machine learning as well as standard multiple regression approaches to predict neuropsychological outcomes in the 20 test participants.
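A minimal sketch of how VR task variables could be entered into a selection-based regression to predict a neuropsychological index is shown below; it approximates, rather than reproduces, the stepwise entry/deletion procedure, and the file and feature names are hypothetical.

```python
# Minimal sketch: predict a neuropsych index from VR task metrics with forward feature selection.
# Feature names and file are hypothetical; this approximates the stepwise procedure.
import pandas as pd
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LinearRegression

df = pd.read_csv("vr_pilot.csv")  # hypothetical: VR metrics plus neuropsych scores per participant
vr_features = ["hits", "false_alarms", "discriminability", "mean_rt"]  # hypothetical metric names
X, y = df[vr_features], df["nab_attention_index"]

selector = SequentialFeatureSelector(
    LinearRegression(), n_features_to_select=2, direction="forward", cv=5
).fit(X, y)
kept = X.columns[selector.get_support()]
model = LinearRegression().fit(X[kept], y)

print("Selected predictors:", list(kept))
print("Multiple R:", model.score(X[kept], y) ** 0.5)  # square root of in-sample R^2
```

Cross-validated forward selection is used here in place of classical p-value-driven stepwise entry, which is a design choice for small samples rather than the authors' exact method.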
Results:
In this limited study, the machine learning approach did not converge on a meaningful prediction due to the instability of the small sample. However, standard multiple linear regression using stepwise entry/deletion of the VR task variables significantly predicted neuropsychological performance. The VR task predicted WASI-II vocabulary (R=.457, p=.043), NAB Attention Index (R=.787, p=.001), and NAB Executive Function Index (R=.715, p=.002). Interestingly, these performances were generally as good or better than the predictions resulting from the ImPACT, a commercially available neuropsychological test battery, which correlated with WASI-II vocabulary (R=.557, p=.011), NAB Attention Index (R=.574, p=.008), and NAB Executive Function Index (R=.619, p=.004).
Conclusions:
Our pilot VR task was able to predict performances on standard neuropsychological assessment measures at a level comparable to that of a commercially available computerized assessment battery, providing preliminary evidence of concurrent validity. Ongoing work is expanding this rudimentary task into one involving greater complexity and nuance. As multivariate data integration models are incorporated into the tasks and extraction features, future work will collect data on much larger samples of individuals to develop and refine the machine learning models. With additional work this approach may provide an important advance in neuropsychological assessment methods.
Mild traumatic brain injury (mTBI) remains one of the most silent recurrent head injuries reported in the United States. mTBI accounts for nearly 75 percent of all traumatic brain injuries in the American population. Brain injury is often associated with impulsivity, but the association between resting-state functional connectivity (rsFC) and impulsivity at multiple time-since-injury (TSI) stages is unclear. We hypothesized that rsFC within the default mode network (DMN) would predict impulsivity across multiple stages of recovery in mild TBI.
Participants and Methods:
Participants included healthy controls (HC: n=35 total [15 male, 20 female], age M=24.40, SD=5.95) and individuals with mTBI (n=121 total [43 male, 78 female], age M=24.76, SD=7.48). Participants completed a cross-sectional study design at various post-injury time points (2 weeks [2W] and 1, 3, 6, and 12 months [1M, 3M, 6M, 12M]). Participants completed a neuroimaging session and behavioral tasks, including a psychomotor vigilance task. Impulsivity was assessed as a combination of false starts and impulsive responses on the behavioral tasks. The neuroimaging session included an rsFC scan. To predict impulsivity from brain connectivity, we conducted a series of stepwise linear regression analyses with the 11 functional brain connections (extracted as Fisher’s z-transformed correlations between regions) as predictors and each of the 13 neurocognitive factor scores as the outcome, analyzed separately. We focus here on the outcomes for the impulsivity factor.
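To make the stepwise procedure concrete, here is a minimal sketch of p-value-based forward entry over the 11 connectivity predictors; the 0.05 entry criterion, file name, and column names are assumptions, not the authors' exact specification.

```python
# Minimal sketch of forward (stepwise entry) regression over the connectivity predictors.
# Entry criterion, file, and column names are assumptions.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("mtbi_connectivity.csv")  # hypothetical: Fisher-z connections + factor scores
connections = [c for c in df.columns if c.startswith("conn_")]  # the 11 predictors
y = df["impulsivity_factor"]

selected, remaining = [], list(connections)
while remaining:
    # p-value of each candidate when added to the current model
    pvals = {c: sm.OLS(y, sm.add_constant(df[selected + [c]])).fit().pvalues[c]
             for c in remaining}
    best = min(pvals, key=pvals.get)
    if pvals[best] >= 0.05:  # stop when no candidate meets the entry criterion
        break
    selected.append(best)
    remaining.remove(best)

if selected:
    print(sm.OLS(y, sm.add_constant(df[selected])).fit().summary())
else:
    print("No connection met the entry criterion.")
```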
Results:
Results showed that greater positive connectivity between the Right Frontal Pole and the anterior cingulate cortex (ACC; seed) was associated with greater impulsivity (ß = .158, t = 1.98, p = .049). Individuals in the 2W group demonstrated one significant predictor (R = .632, R2 = .399, F = 5.32, p = .050): greater positive connectivity between the Right Frontal Pole and the ACC (seed) (ß = .632, t = 2.31, p = .050) was associated with higher impulsivity at the 2W time-since-injury. No predictors emerged for the 1M, 3M, or 6M conditions. However, individuals in the 12M group demonstrated two significant predictor connections (R = .497, R2 = .247, F = 5.73, p = .007). Overall, a linear combination of greater negative (anticorrelated) connectivity between the Right Frontal Pole and the mPFC (seed) (ß = -.576, t = -3.53, p = .002) and greater positive connectivity between the Paracingulate Cortex (seed) and the Left Lateral Prefrontal Cortex (ß = .368, t = 2.14, p = .039) was associated with greater impulsivity in individuals with mTBI at 12M.
Conclusions:
These findings suggest functional connectivity between the anterior node of the DMN and prefrontal cortex regions involved in behavioral control was predictive of higher impulsivity in individuals with mTBI at 2W and 12M post injury, but not at other time frames. Interestingly, these connections differed at the two time points. Acutely, greater impulsivity was associated with greater connectivity among regions involved in error detection, exploration, and emotion. At one year, the connections involve regions associated with error monitoring and inhibitory processes. This may reflect compensatory strategy development during recovery.
Mild traumatic brain injury (mTBI) remains one of the most prevalent brain injuries, affecting approximately one-in-sixty Americans. Previous studies have shown an association between white matter integrity and aggression at chronic stages (either 6 or 12 months post-mTBI); however, the association between white matter axonal damage, neuropsychological outcomes, and elevated aggression at multiple time-since-injury (TSI) stages is unclear. We hypothesized that functional connectivity between the default mode network (DMN), a key brain network involved in cognitive, self-reflective, and emotional processes, and other cortical regions would predict elevated aggression and emotional disturbances across multiple stages of recovery in mild TBI.
Participants and Methods:
Participants included healthy controls (HC: n=35 total [15 male, 20 female], age M=24.40, SD=5.95) and individuals with mTBI (n=121 total [43 male, 78 female], age M=24.76, SD=7.48). Participants completed a cross-sectional study design at specific post-injury time points (2 weeks [2W] and 1, 3, 6, and 12 months [1M, 3M, 6M, 12M]). Participants completed a comprehensive neuropsychological battery and a neuroimaging session, including resting-state functional connectivity (FC). Here, we focus on the FC outcomes for the DMN. During the neuropsychological assessment, participants completed tests that measured learning and memory, speed of information processing, executive function, and attention. To predict neuropsychological performance from brain connectivity, we conducted a series of stepwise linear regression analyses with the 11 functional brain connections (extracted as Fisher’s z-transformed correlations between regions) as predictors and each of the 13 neurocognitive factor scores as the outcome, analyzed separately.
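As a brief illustration of how a Fisher z-transformed connectivity predictor can be constructed from two ROI time series, a minimal sketch follows; the time series here are simulated, not study data, and the function name is hypothetical.

```python
# Minimal sketch of constructing a Fisher z-transformed connectivity value from two ROI time series.
import numpy as np

def fisher_z_connectivity(ts_seed: np.ndarray, ts_target: np.ndarray) -> float:
    """Pearson correlation between two ROI time series, Fisher z-transformed (arctanh)."""
    r = np.corrcoef(ts_seed, ts_target)[0, 1]
    return float(np.arctanh(r))

# Example with simulated BOLD time series (200 volumes per ROI)
rng = np.random.default_rng(0)
seed_ts, target_ts = rng.standard_normal(200), rng.standard_normal(200)
print(fisher_z_connectivity(seed_ts, target_ts))
```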
Results:
Consistent with our hypothesis, one predictor emerged as significant for the total sample (R = .187, R2 = .035, F = 5.55, p = .020): positive connectivity between the Right Inferior Frontal Gyrus and the PCC (seed) was associated with increased aggression across all participants (ß = .187, t = 2.36, p = .020). One predictor emerged as significant in the 2W group (R = .719, R2 = .518, F = 8.58, p = .019): greater negative (anticorrelated) connectivity between the Left Lateral Occipital Cortex (ß = -.719, t = -2.93, p = .019) and the PCC (seed) was associated with greater aggression at 2W, but no predictors emerged at 1M or 3M. Individuals in the 6M group showed one significant predictor (R = .675, R2 = .455, F = 16.71, p = .001): greater positive connectivity between the Right Lateral Occipital Cortex (ß = .675, t = 4.09, p = .001) and the PCC (seed) was associated with greater aggression at 6M. No associations were evident at 12M.
Conclusions:
Overall, these findings suggest functional connectivity between the posterior hub of the DMN and cortical regions within the occipital cortex was predictive of higher aggression in individuals with mTBI. However, the direction of this connectivity differed at 2W versus 6M, suggesting a complex process of recovery that may contribute differentially to aggression in patients with mTBI. As these regions are involved in self-consciousness and visual perception, this may point toward future avenues for aiding in functional recovery of emotional dysregulation in patients with persistent post-concussion syndrome.
There is a pressing need for sensitive, non-invasive indicators of cognitive impairment in those at risk for Alzheimer’s disease (AD). One group at an increased risk for AD is APOEε4 carriers. One study found that cognitively normal APOEε4 carriers are less likely to produce low frequency (i.e., less common) words on semantic fluency tasks relative to non-carriers, but this finding has not yet been replicated. This study aims to replicate these findings within the Wake Forest ADRC clinical core population, and examine whether these findings extend to additional semantic fluency tasks.
Participants and Methods:
This sample includes 221 APOEε4 non-carriers (165 females, 56 males; 190 White, 28 Black/African American, 3 Asian; M age = 69.55) and 79 APOEε4 carriers (59 females, 20 males; 58 White, 20 Black/African American, 1 Asian; M age = 65.52) who had been adjudicated as cognitively normal at baseline. Semantic fluency data for both the animal task and the vegetable task were scored for the total number of items as well as mean lexical frequency (obtained via the SUBTLEXus database). Demographic variables and additional cognitive variables (MMSE, MoCA, and AMNART scores) were also included from the participants’ baseline visit.
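A minimal sketch of the fluency scoring described above is given below; the SUBTLEXus file name and column labels (Word, Lg10WF) are assumptions about a local copy of the norms, and the example word list is invented.

```python
# Minimal sketch of scoring one fluency trial for total items and mean lexical frequency.
# SUBTLEXus file name and column labels are assumptions about the local copy of the norms.
import pandas as pd

subtlex = pd.read_csv("SUBTLEXus.csv")
log_freq = dict(zip(subtlex["Word"].str.lower(), subtlex["Lg10WF"]))  # log10 word frequency

def score_fluency(responses):
    """Return (total items, mean lexical frequency) for one participant's responses."""
    words = [w.lower() for w in responses]
    freqs = [log_freq[w] for w in words if w in log_freq]
    mean_freq = sum(freqs) / len(freqs) if freqs else float("nan")
    return len(words), mean_freq

print(score_fluency(["dog", "cat", "aardvark", "zebra"]))
```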
Results:
APOEε4 carriers and non-carriers did not differ on years of education, AMNART scores, or gender (ps > 0.05). APOEε4 carriers were slightly younger and included more Black/African American participants (ps < 0.05). Stepwise linear regression was used to determine the variance in total fluency score and mean lexical frequency accounted for by APOEε4 status after including relevant demographic variables (age, sex, race, years of education, and AMNART score). As expected, demographic variables accounted for significant variance in total fluency score (p < 0.0001). Age accounted for significant variance in total fluency score for both the animal task (ß = -0.32, p <0.0001) and the vegetable task (ß = -0.29, p < 0.0001), but interestingly, not the lexical frequency of words produced. After accounting for demographic variables, APOEε4 status did not account for additional variance in lexical frequency for either fluency task (ps > 0.05). Interestingly, APOEε4 status was a significant predictor of total words for the vegetable semantic fluency task only (ß = 0.13, p = 0.01), resulting in a model that accounted for more variance (R2 = 0.25, F(6, 292) = 16.11, p < 0.0001) in total words than demographic variables alone (R2 = 0.23, F(5, 293) = 17.75, p < 0.0001).
Conclusions:
Unsurprisingly, we found that age, AMNART, and education were significant predictors of total word fluency. One unexpected finding was that age did not predict lexical frequency; that is, regardless of age, participants tended to retrieve words of the same lexical frequency, which stands in contrast to the notion that retrieval efficiency for infrequent words declines with age. With regard to APOEε4, we did not replicate existing work demonstrating differences in lexical frequency on semantic fluency tasks between ε4 carriers and non-carriers, possibly due to differences in the demographic characteristics of the sample.
The workhouse was a central facet of the new poor law and the elderly – and aged men in particular – came to dominate workhouse populations. This article is the first to analyse a very large data set of almost 4,000 workhouses from all areas of England and Wales extracted from the I-CeM data set, which reveals the composition of workhouse residents on census night by age, gender, and geography between 1851 and 1911. Factors influencing the proportion of the elderly in the workhouse include the dependency ratio and internal migration, urbanisation and a commitment to institutions in cities, and the availability of outdoor relief and other avenues of support. Destitution, want of work, old age and illness propelled the elderly into the workhouse. The crusade against outrelief of the 1870s contributed to this increase, and, while the introduction of old age pensions reduced those over the age of 70, this did not prevent the ‘younger aged’ (those aged 60–69) from increasing.
Despite advances in cancer genomics and the increased use of genomic medicine, metastatic cancer is still mostly an incurable and fatal disease. With diminishing returns from traditional drug discovery strategies, and high clinical failure rates, more emphasis is being placed on alternative drug discovery platforms, such as ex vivo approaches. Ex vivo approaches aim to embed biological relevance and inter-patient variability at an earlier stage of drug discovery, and to offer more precise treatment stratification for patients. Moreover, these techniques have a high potential to offer personalised therapies to patients, complementing and enhancing genomic medicine. Although an array of approaches is available to researchers, only a minority of techniques have made it through to direct patient treatment within robust clinical trials. Within this review, we discuss the current challenges to ex vivo approaches within clinical practice and summarise the contemporary literature which has directed patient treatment. Finally, we map out how ex vivo approaches could transition from a small-scale, predominantly research-based technology to a robust and validated predictive tool. In future, these pre-clinical approaches may be integrated into clinical cancer pathways to assist in the personalisation of therapy choices and to hopefully improve patient experiences and outcomes.
In recent years, a variety of efforts have been made in political science to enable, encourage, or require scholars to be more open and explicit about the bases of their empirical claims and, in turn, make those claims more readily evaluable by others. While qualitative scholars have long taken an interest in making their research open, reflexive, and systematic, the recent push for overarching transparency norms and requirements has provoked serious concern within qualitative research communities and raised fundamental questions about the meaning, value, costs, and intellectual relevance of transparency for qualitative inquiry. In this Perspectives Reflection, we crystallize the central findings of a three-year deliberative process—the Qualitative Transparency Deliberations (QTD)—involving hundreds of political scientists in a broad discussion of these issues. Following an overview of the process and the key insights that emerged, we present summaries of the QTD Working Groups’ final reports. Drawing on a series of public, online conversations that unfolded at www.qualtd.net, the reports unpack transparency’s promise, practicalities, risks, and limitations in relation to different qualitative methodologies, forms of evidence, and research contexts. Taken as a whole, these reports—the full versions of which can be found in the Supplementary Materials—offer practical guidance to scholars designing and implementing qualitative research, and to editors, reviewers, and funders seeking to develop criteria of evaluation that are appropriate—as understood by relevant research communities—to the forms of inquiry being assessed. We dedicate this Reflection to the memory of our coauthor and QTD working group leader Kendra Koivu.1
The deterrent workhouse, with its strict rules for the behavior of inmates and boundaries of authority of the workhouse officers, was a central expression of the Poor Law Amendment Act of 1834, known widely as the New Poor Law. This article explores for the first time the day-to-day experience of the power and authority of workhouse masters, matrons, other officers of the workhouse, and its Board of Guardians, and the resistance and agency of resentful inmates. Despite new sets of regulations to guide workhouse officers in the uniform imposition of discipline on residents, there was a high degree of regional diversity not only in the types of offenses committed by paupers but also in welfare policy relating to the punishments inflicted for disorderly and refractory behavior. And while pauper agency was significant, it should not be overstated, given the disparity in power between inmates and workhouse officials.
Wellness is often intimidating. Pursuing it requires significant commitment and carries emotional risk and vulnerability [1]. While fear can be a strong motivator, it can also be the reason one may not try, or may not follow through with, a plan. In most cases, fear prevents us from being able to accomplish what we wish to. In the case of wellness, we found that, given the commitment required, many were challenged by the fear of not being able to achieve the results and goals they had set for themselves. For example, if one was never taught, or never had modeled for them, how to live a life full of joy, love, and wellness, they will fear a life different from what they were taught, whether learned by observation or directly. Occasionally it can be more difficult and painful to break a pattern than to live in it [2]. The path to wellness will likely be unique for each and every one of us.