Performance Consistency on a Measure of Sustained and Selective Attention
- Lauren M. Baumann, Keith P. Johnson, Lee Ashendorf
-
- Journal:
- Journal of the International Neuropsychological Society / Volume 29 / Issue s1 / November 2023
- Published online by Cambridge University Press:
- 21 December 2023, p. 286
-
Objective:
Attention concerns, particularly difficulties with focusing and regulating attention, are reported in diverse clinical contexts. The Ruff 2&7 Selective Attention Test (Ruff 2&7; Ruff & Allen, 1996) is a measure of sustained and selective attention that assesses automatic detection and effortful processing. The goal of this study was to create an internal consistency metric within this test and to determine its cognitive predictors by evaluating associations with executive control of attention and other cognitive skills. It was hypothesized that individuals who perform more consistently across the Ruff 2&7 would have more robust executive functioning skills, particularly those related to regulating and directing attention and to the planning and utilization of cognitive resources.
Participants and Methods: The current study examined a clinical sample of 98 United States veterans with a history of mild traumatic brain injury. After excluding invalid cases (n=24), the final sample consisted of 74 veterans (mean age = 38.5 [SD = 8.9] years; mean education = 13.9 [SD = 2.2] years; 78% male; 82% White, 7% Black, 8% Hispanic, 2% Asian). A consistency score was defined as the absolute value of the intertrial change in target hits plus errors across each pair of trials of the same stimulus type (Automatic Detection [AD] and Controlled Search [CS]). Hierarchical linear regression modeling was used to evaluate the relative contributions of memory and executive function measures (Rey Auditory Verbal Learning Test, Delis-Kaplan Executive Function System Tower Test, phonemic fluency, Trail Making Test B) and subjective symptom report (PTSD Checklist for DSM-5, Barkley Adult ADHD Rating Scale for DSM-IV).
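The consistency score described above is a simple arithmetic construction. The sketch below shows one plausible reading of it in Python; the trial structure and the example numbers are illustrative assumptions, not data from the study or the test manual:

```python
def consistency_score(trials):
    """Sum of absolute intertrial changes in (hits + errors) across
    successive pairs of trials of ONE stimulus type (e.g., all
    Automatic Detection trials, in administration order).

    `trials` is a list of (target_hits, errors) tuples.
    Higher scores indicate LESS consistent performance.
    """
    totals = [hits + errors for hits, errors in trials]
    return sum(abs(b - a) for a, b in zip(totals, totals[1:]))

# Hypothetical Automatic Detection trials: (target hits, errors)
ad_trials = [(30, 2), (27, 1), (31, 3)]
print(consistency_score(ad_trials))  # |28-32| + |34-28| = 10
```

Under this reading, a participant whose hit-plus-error totals barely move from trial to trial scores near zero, while large swings inflate the score.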
Results: The mean deviation scores for the two trial types were similar (AD mean=13.6, SD=5.9; CS mean=13.6, SD=5.3). In predicting consistency across AD trials, delayed recall contributed 11% unique variance (p=.013), while no other block was statistically significant. For CS trials, self-reported PTSD and inattention symptoms contributed a combined 20% of unique variance to the model (p=.007), while there were no statistically significant cognitive predictors in this model.
Conclusions: Contrary to expectation, executive function measures did not explain statistically significant variance in performance across either trial type. Less consistent performance on AD trials was associated with weaker verbal memory. Less consistent performance on CS trials, which theoretically require greater executive control, was not associated with any cognitive scores, but was associated with more severe self-reported psychological and inattention symptoms. These findings buttress the conceptual distinction between AD and CS trial types, and they point to both cognitive and non-cognitive underpinnings of performance consistency.
Use of Embedded Performance Validity Measures Using Verbal Fluency Tests in a Clinical Sample of Military Veterans
- Keith P Johnson, Lee Ashendorf, Lauren M Baumann
-
- Journal:
- Journal of the International Neuropsychological Society / Volume 29 / Issue s1 / November 2023
- Published online by Cambridge University Press:
- 21 December 2023, pp. 758-759
-
Objective:
As neuropsychologists aim to collect valid data, maximize the utility of assessments, make effective use of time, and best serve patient populations, measurement of performance validity is considered a critical issue for the field. Because effort may vary across an evaluation, including performance validity tests (PVTs) throughout the assessment is important, and incorporating embedded PVTs alongside freestanding PVTs can be particularly useful in this regard. The COWAT and animal naming are commonly administered verbal fluency measures. Past investigations into their potential for detecting invalid performance have been limited, however, and more research is needed. Perhaps most promising, Sugarman and Axelrod (2015) described a logistic regression-derived formula utilizing the combined raw scores of the COWAT and animal naming. The current study aimed to further evaluate embedded PVTs within the COWAT and animal naming and to provide additional support for their use.
Participants and Methods: All subjects were drawn from a mixed clinical sample of military veterans referred for neuropsychological evaluation at two VA Medical Centers in the northeastern U.S. Subjects deemed credible had zero PVT failures. Subjects were considered non-credible performers if they failed at least two of a possible eight PVTs administered. Subjects who failed one PVT were excluded from the study (n = 53). The final sample consisted of 116 individuals with credible performance (Mean Age = 35.5, SD = 8.8; Mean Edu = 13.6, SD = 2.0; Mean Est. IQ = 106, SD = 7.9) and 94 individuals with psychometrically determined non-credible performance (Mean Age = 38.5, SD = 9.4; Mean Edu = 113, SD = 2.1; Mean Est. IQ = 101, SD = 8.7). The performance of the COWAT and animal naming in detecting non-credible responding was evaluated by calculating classification accuracy statistics and applying the logistic regression formulas reported in Sugarman and Axelrod (2015).
Results: For the COWAT, the optimal cutoff was a raw score of <27 (specificity = 89%; sensitivity = 31%) or a T-score of <35 (specificity = 92%; sensitivity = 31%). For animal naming, the optimal cutoffs were <16 for the raw score (specificity = 92%; sensitivity = 38%) and <37 for the T-score (specificity = 91%; sensitivity = 33%). The logistic regression formula based on raw scores for both the COWAT and animal naming was inadequately sensitive at the recommended cutoff in this sample, but a coefficient of > .28 proved optimal (91% specificity; 42% sensitivity). When the T-score formula was used, a coefficient of > .38 was optimal (91% specificity; 28% sensitivity).
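The classification accuracy statistics reported above (sensitivity and specificity at a given cutoff) reduce to simple proportions. A minimal sketch, assuming a "score below cutoff = flag as non-credible" decision rule; the scores are invented, not the study's data:

```python
def classification_accuracy(credible, non_credible, cutoff):
    """Evaluate a 'score below cutoff = non-credible' rule.

    Sensitivity: proportion of non-credible cases correctly flagged.
    Specificity: proportion of credible cases correctly NOT flagged.
    """
    sensitivity = sum(s < cutoff for s in non_credible) / len(non_credible)
    specificity = sum(s >= cutoff for s in credible) / len(credible)
    return sensitivity, specificity

# Hypothetical COWAT raw scores for each group
credible = [35, 41, 28, 33, 39]
non_credible = [22, 26, 31, 19]
sens, spec = classification_accuracy(credible, non_credible, cutoff=27)
print(sens, spec)  # 0.75 1.0
```

In PVT research the cutoff is typically chosen to hold specificity at or above roughly 90% (limiting false accusations of non-credible performance) and then maximize sensitivity, which is why the sensitivities reported above are comparatively low.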
Conclusions: Results of the current study suggest that PVTs embedded within the commonly administered COWAT and animal naming verbal fluency tests can effectively detect low effort, in concordance with generally accepted standards. The logistic regression formula using raw scores in particular appears to be the most effective, consistent with the findings reported by Sugarman and Axelrod (2015).
Reliable Change on Neuropsychological Tests in the Uniform Data Set
- Brandon E. Gavett, Lee Ashendorf, Ashita S. Gurnani
-
- Journal:
- Journal of the International Neuropsychological Society / Volume 21 / Issue 7 / August 2015
- Published online by Cambridge University Press:
- 03 August 2015, pp. 558-567
-
Longitudinal normative data obtained from a robust elderly sample (i.e., believed to be free from neurodegenerative disease) are sparse. The purpose of the present study was to develop reliable change indices (RCIs) that can assist with interpretation of test score changes relative to a healthy sample of older adults (ages 50+). Participants were 4217 individuals who completed at least three annual evaluations at one of 34 past and present Alzheimer’s Disease Centers throughout the United States. All participants were diagnosed as cognitively normal at every study visit; the number of approximately annual evaluations ranged from three to nine. One-year RCIs were calculated for 11 neuropsychological variables in the Uniform Data Set by regressing follow-up test scores onto baseline test scores, age, education, visit number, post-baseline assessment interval, race, and sex in a linear mixed effects regression framework. In addition, the cumulative frequency distributions of raw score changes were examined to describe the base rates of test score changes. Baseline test score, age, education, and race were robust predictors of follow-up test scores across most tests. The effects of maturation (aging) were more pronounced on tests related to attention and executive functioning, whereas practice effects were more pronounced on tests of episodic and semantic memory. Interpretation of longitudinal changes on 11 cognitive test variables can be facilitated through the use of reliable change intervals and base rates of score changes in this robust sample of older adults. A Web-based calculator is provided to assist neuropsychologists with interpretation of longitudinal change. (JINS, 2015, 21, 558–567)
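The regression-based reliable change approach can be illustrated in miniature. The sketch below deliberately simplifies the study's linear mixed-effects model with multiple covariates down to a single-predictor ordinary least squares fit on baseline score alone; all numbers are hypothetical and the 1.645 threshold is a common convention, not a value taken from the paper:

```python
def fit_ols(x, y):
    """Slope and intercept for y = a + b*x (single predictor, OLS)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

def reliable_change_z(baseline, followup, norm_base, norm_follow):
    """Standardized discrepancy between the observed follow-up score
    and the score predicted from baseline in a normative sample.
    |z| > 1.645 is a common 90% reliable-change threshold."""
    a, b = fit_ols(norm_base, norm_follow)
    residuals = [yf - (a + b * xb)
                 for xb, yf in zip(norm_base, norm_follow)]
    # Residual SD with n-2 degrees of freedom (two fitted parameters)
    sd = (sum(r * r for r in residuals) / (len(residuals) - 2)) ** 0.5
    return (followup - (a + b * baseline)) / sd

# Hypothetical normative baseline/follow-up scores
norm_base = [10, 20, 30, 40]
norm_follow = [12, 21, 33, 42]
z = reliable_change_z(20, 25, norm_base, norm_follow)
print(round(z, 2))  # roughly 3.27: a reliably large improvement
```

The study's mixed-effects framework extends this idea by adding demographic covariates and handling multiple visits per participant, but the interpretive logic is the same: a follow-up score is flagged as reliable change when it falls far from its regression-predicted value relative to the normative residual spread.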