
Development and validation of a nonverbal consensus-based semantic memory paradigm in patients with epilepsy

Published online by Cambridge University Press: 15 April 2024

Edwina B. Tran
Affiliation:
Department of Neurology, University of California, San Francisco, CA, USA; Weill Institute for Neurosciences, University of California, San Francisco, CA, USA
Jet M.J. Vonk
Affiliation:
Department of Neurology, University of California, San Francisco, CA, USA; Memory and Aging Center, University of California, San Francisco, CA, USA
Kaitlin Casaletto
Affiliation:
Department of Neurology, University of California, San Francisco, CA, USA; Memory and Aging Center, University of California, San Francisco, CA, USA
Da Zhang
Affiliation:
Department of Neurology, University of California, San Francisco, CA, USA; Weill Institute for Neurosciences, University of California, San Francisco, CA, USA
Raphael Christin
Affiliation:
Department of Neurology, University of California, San Francisco, CA, USA; Weill Institute for Neurosciences, University of California, San Francisco, CA, USA
Siddharth Marathe
Affiliation:
Department of Neurology, University of California, San Francisco, CA, USA; Weill Institute for Neurosciences, University of California, San Francisco, CA, USA
Maria Luisa Gorno-Tempini
Affiliation:
Department of Neurology, University of California, San Francisco, CA, USA; Memory and Aging Center, University of California, San Francisco, CA, USA
Edward F. Chang
Affiliation:
Weill Institute for Neurosciences, University of California, San Francisco, CA, USA; Department of Neurological Surgery, University of California, San Francisco, CA, USA
Jonathan K. Kleen*
Affiliation:
Department of Neurology, University of California, San Francisco, CA, USA; Weill Institute for Neurosciences, University of California, San Francisco, CA, USA
Corresponding author: Jonathan K. Kleen; Email: jon.kleen@ucsf.edu

Abstract

Objective:

Brain areas implicated in semantic memory can be damaged in patients with epilepsy (PWE). However, it is challenging to delineate semantic processing deficits from acoustic, linguistic, and other verbal aspects in current neuropsychological assessments. We developed a new Visual-based Semantic Association Task (ViSAT) to evaluate nonverbal semantic processing in PWE.

Method:

The ViSAT was adapted from similar predecessors (the Pyramids & Palm Trees test, PPT; the Camels & Cactus Test, CCT) and comprises 100 unique trials using real-life color pictures that avoid demographic, cultural, and other potential confounds. We obtained performance data from 23 PWE participants and 24 control participants (Control), along with crowdsourced normative data from 54 Amazon Mechanical Turk (Mturk) workers.

Results:

The ViSAT reached a consensus >90% in 91.3% of trials, compared to 83.6% for the PPT and 82.9% for the CCT. A deep learning model demonstrated that visual features of the stimulus images (color, shape; i.e., non-semantic) did not influence the top answer choices (p = 0.577). The PWE group had lower accuracy than the Control group (p = 0.019). PWE also had longer response times than the Control group in general, and this difference was amplified during the semantic processing (trial answer) stage (both p < 0.001).

Conclusions:

This study demonstrated performance impairments in PWE that may reflect dysfunction of nonverbal semantic memory circuits, for example when seizure onset zones overlap with key semantic regions (e.g., the anterior temporal lobe). The ViSAT paradigm avoids confounds, is repeatable for longitudinal use, captures behavioral data, and is open source; we therefore propose it as a strong alternative for clinical and research assessment of nonverbal semantic memory.

Information

Type
Research Article
Creative Commons
Creative Commons License - CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2024. Published by Cambridge University Press on behalf of International Neuropsychological Society

Table 1. Demographic information for all groups. Age and education are expressed as median (range in parentheses); gender, race, and ethnicity are expressed as percentages (y = years, M = male, F = female, NB = nonbinary, AIAN = American Indian/Alaska Native, A = Asian, B = Black, M = more than one race, O = other, W = White, H = Hispanic, NH = non-Hispanic)


Figure 1. Non-verbal image-based semantic association assessments including the ViSAT. A. Example trials from the classic PPT task (Howard, 1992) at left and the more recent modified CCT (Moore et al., 2022) at right. The layout shows each stimulus image at the top and the answer choices below. B. Two example trials (rows) from the ViSAT task described in this manuscript, including the fixation stage (left, 2–3 s jittered duration), stimulus stage (middle), and answer stage (right). Control and PWE participants saw the stimulus presented in isolation (middle) and advanced only after clicking it, ensuring attention to the stimulus and enabling cognitive and behavioral time-locking of the stimulus and answer stages separately, as well as of the answer choice. Mturk workers viewed the stimulus simultaneously with the answer choices (right panels), in the same manner as for the PPT and CCT trials in A. C. Violin plots show distributions of the percent (%) consensus among Mturk workers (n = 54) for the top answer of each trial (dots) of the PPT task (n = 51 trials), CCT task (n = 32), and ViSAT task (n = 100). Notably, the probability of obtaining a consensus at chance (black lines) is 50% for the PPT (undermining direct statistical comparison with the CCT and ViSAT) and 25% for both the CCT and ViSAT. Distributions illustrate a significant trend toward higher PCons in the ViSAT compared to the CCT (p = 0.0488, Mann-Whitney U test). D. Percentage of trials whose stimulus images contain content from each semantic category, across all ViSAT trials (N = 100). E. Comparable to D for the trial answer choices (for trials in which categories varied across answer choices, the Mturk consensus answer image was given precedence). F. Comparable to D and E for the general semantic relationship between the stimuli and the answer choices.
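For illustration, the sketch below shows one way the per-trial percent consensus (PCons) could be computed from raw answer choices and compared across tasks with a Mann-Whitney U test, as in Figure 1C. This is a minimal Python sketch, not the authors' code; the response dictionaries are hypothetical toy data rather than the study's worker responses, and the sidedness of the test used in the paper is not specified here.

```python
# Hedged sketch: per-trial percent consensus (PCons) and a Mann-Whitney U
# comparison between tasks. All response data below are toy placeholders.
from collections import Counter
from scipy.stats import mannwhitneyu

def percent_consensus(responses_by_trial):
    """Percent of respondents choosing the most frequent (consensus) answer, per trial."""
    pcons = []
    for choices in responses_by_trial.values():
        top_count = Counter(choices).most_common(1)[0][1]
        pcons.append(100.0 * top_count / len(choices))
    return pcons

# Toy stand-ins for {trial_id: list of answer labels chosen by each worker}.
cct_responses = {1: ["A"] * 40 + ["B"] * 14, 2: ["C"] * 47 + ["A"] * 7}
visat_responses = {1: ["B"] * 52 + ["D"] * 2, 2: ["A"] * 50 + ["C"] * 4}

cct_pcons = percent_consensus(cct_responses)
visat_pcons = percent_consensus(visat_responses)

# Two-sided comparison of the PCons distributions (Figure 1C reports p = 0.0488
# on the real PPT/CCT/ViSAT data).
stat, p = mannwhitneyu(visat_pcons, cct_pcons)
print(f"Mann-Whitney U = {stat:.1f}, p = {p:.4f}")
```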


Figure 2. ViSAT consensus breakdown and image feature similarity. A. Breakdown of the percentage of Mturk workers who chose each answer (PCons in green). See Supplemental Figure 1 for more detail on the refinement process during ViSAT development. B. Breakdown of answer proportions for each trial (n = 100), sorted by consensus answer proportion (PCons). The majority of trials (n = 92) reached a PCons above 90%. C. Visual feature similarity score distributions calculated using ResNet-18 on an image2vec embedding (based on shapes, colors, textures, and other features; i.e., non-semantic). Image similarity scores (0 = no similarity, 1 = perfectly similar) were computed between stimulus images and consensus answers (blue), non-consensus answers (orange), and, as a control, the top visually similar images for each stimulus (green). Similarity scores did not differ between the consensus and non-consensus conditions (p = 0.577, two-sample t-test), whereas the top visual similarity scores were significantly higher than in the consensus condition (p < 0.001).
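A common way to implement the kind of image2vec embedding described in panel C is to take the penultimate-layer activations of a pretrained ResNet-18 and compare images by cosine similarity. The sketch below illustrates that approach under assumptions; the file names are hypothetical and the authors' exact preprocessing and similarity metric may differ.

```python
# Hedged sketch of a ResNet-18 "image2vec" visual-similarity score between a
# stimulus image and answer images, analogous to Figure 2C. Requires torchvision >= 0.13.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

# Pretrained ResNet-18 with the final classification layer removed -> 512-d embeddings.
resnet = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
resnet.fc = torch.nn.Identity()
resnet.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def embed(path):
    """Return a 512-d visual feature vector for one image file."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return resnet(img).squeeze(0)

def visual_similarity(path_a, path_b):
    """Cosine similarity of the embeddings; higher = more visually alike (non-semantic)."""
    return F.cosine_similarity(embed(path_a), embed(path_b), dim=0).item()

# Hypothetical file names for one ViSAT trial.
sim_consensus = visual_similarity("stimulus_01.png", "answer_consensus_01.png")
sim_other = visual_similarity("stimulus_01.png", "answer_nonconsensus_01.png")
print(sim_consensus, sim_other)
```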


Figure 3. Trial-level correlations between the ViSAT percent consensus among Mturk workers (y-axis) and among healthy control participants (green; r = 0.541, p < 0.001, Spearman) and participants with epilepsy (magenta; r = 0.522, p < 0.001, Spearman).
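A brief sketch of this kind of trial-level Spearman correlation follows, assuming one per-trial array of Mturk percent-consensus values and one per-trial array for each participant group; the arrays below are simulated placeholders, not study data.

```python
# Hedged sketch of the trial-level Spearman correlations in Figure 3 (toy data only).
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
mturk_pcons = rng.uniform(60, 100, size=100)               # % consensus per ViSAT trial
control_vals = 0.8 * mturk_pcons + rng.normal(0, 10, 100)  # toy per-trial Control values
pwe_vals = 0.7 * mturk_pcons + rng.normal(0, 12, 100)      # toy per-trial PWE values

for label, vals in [("Control", control_vals), ("PWE", pwe_vals)]:
    r, p = spearmanr(mturk_pcons, vals)
    print(f"{label}: Spearman r = {r:.3f}, p = {p:.3g}")
```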


Figure 4. ViSAT accuracy for individual subgroups. A. Violin plots show distributions of accuracy for each group, with the top (consensus) answers from the Mturk normative data designated as the correct choices (dots = individual participants; white dots = medians; grey lines = interquartile ranges; black dots = temporal lobe(s) involved in the seizure onset zone; grey dots = primary generalized epilepsy). The Mturk group showed significantly higher percent accuracy (relative to consensus; PCons) than the Control and PWE groups, and the PWE group showed lower PCons than the Control group (**p < 0.001, *p = 0.019; two-sample t-tests). B. Correlation scatterplots show no correlation between individual accuracy and age (left) or years of education (right) in any group (colors = groups as in A; p > 0.05 for all, Spearman; least-squares lines shown for illustrative purposes only).
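The group comparison in panel A and the demographic correlations in panel B could be sketched as below; the per-participant accuracies and ages are simulated placeholders, and the exact test options (e.g., equal-variance assumptions) used by the authors are not specified here.

```python
# Hedged sketch of the group accuracy comparison (Figure 4A) and an
# accuracy-vs-demographics correlation (Figure 4B). All values are simulated.
import numpy as np
from scipy.stats import ttest_ind, spearmanr

rng = np.random.default_rng(1)
control_acc = rng.normal(93, 4, size=24)   # toy per-participant % accuracy, Control (n = 24)
pwe_acc = rng.normal(88, 6, size=23)       # toy per-participant % accuracy, PWE (n = 23)

t, p = ttest_ind(control_acc, pwe_acc)
print(f"Control vs. PWE accuracy: t = {t:.2f}, p = {p:.3f}")

# Accuracy vs. age within one group (correlation only; least-squares line omitted).
ages = rng.uniform(20, 65, size=24)
r, p_age = spearmanr(control_acc, ages)
print(f"Accuracy vs. age (Control): r = {r:.3f}, p = {p_age:.3f}")
```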


Figure 5. Response times in the ViSAT task. A. Trial-level correlations (individual data points averaged across participants in each group) between the Mturk PCons and the response time to click the stimulus image, showing no relation for either Control or PWE participants (left panel; p > 0.05 for both groups, Spearman). B. Higher PCons (i.e., easier trials) was related to faster response times for selecting an answer image (right panel; r = −0.561, p < 0.001 for Control; r = −0.546, p < 0.001 for PWE; Spearman). C. The left panel shows distributions of response times for individual trials (averaged across all participants in each group), and the right panel shows the same data as distributions for individual participants (averaged across all trials for each participant). Response times were longer for the answer images than for the stimulus images, and PWE had longer response times in general (both p < 0.001, fixed effects from a linear mixed-effects model). Finally, an interaction was noted in which, relative to Controls, PWE took significantly longer to click the answer images than the stimulus (p < 0.001).
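A linear mixed-effects model consistent with the analysis described in panel C could be specified with fixed effects for group (Control vs. PWE), stage (stimulus vs. answer), and their interaction, plus a random intercept per participant. The sketch below uses statsmodels under these assumptions; the variable names and toy data are illustrative and not the authors' code or data.

```python
# Hedged sketch of a response-time linear mixed-effects model (Figure 5C):
# rt ~ group * stage, with a random intercept per participant. Toy data only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
rows = []
for group in ["Control", "PWE"]:
    for subj in range(20):
        for trial in range(50):
            for stage in ["stimulus", "answer"]:
                # Simulated response time (s): slower for answer stage, for PWE,
                # and extra slowing for PWE at the answer stage (interaction).
                rt = (1.0
                      + 1.5 * (stage == "answer")
                      + 0.5 * (group == "PWE")
                      + 0.8 * (group == "PWE" and stage == "answer")
                      + rng.normal(0, 0.3))
                rows.append({"group": group, "subject": f"{group}_{subj}",
                             "stage": stage, "rt": rt})
df = pd.DataFrame(rows)

model = smf.mixedlm("rt ~ group * stage", df, groups=df["subject"])
result = model.fit()
print(result.summary())
```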

Supplementary material: File

Tran et al. supplementary material (File, 184.7 KB)