
Comprehending semantic and syntactic anomalies in text attributed to an LLM versus a human: An ERP study

Published online by Cambridge University Press:  08 January 2026

Xiaohui Rao
Affiliation:
Department of Linguistics and Modern Languages, The Chinese University of Hong Kong, Hong Kong, China
Hanlin Wu
Affiliation:
Department of Linguistics and Modern Languages, The Chinese University of Hong Kong, Hong Kong, China
Zhenguang G. Cai*
Affiliation:
Department of Linguistics and Modern Languages, The Chinese University of Hong Kong, Hong Kong, China Brain and Mind Institute, The Chinese University of Hong Kong, Hong Kong, China
*
Corresponding author: Zhenguang G. Cai; Email: zhenguangcai@cuhk.edu.hk

Abstract

As people increasingly interact with large language models (LLMs), a critical question emerges: do humans process language differently when communicating with an LLM versus another human? While there is good evidence that people adapt comprehension based on their expectations about their interlocutor in human–human interaction, human–computer interaction research suggests that adaptation to machines is often suspended until an expectation violation occurs. We conducted two event-related potential experiments examining Chinese sentence comprehension, measuring neural responses to semantic and syntactic anomalies attributed to an LLM or a human. Experiment 1 revealed reduced N400 but larger P600 responses to semantic anomalies in LLM-attributed text than in human-attributed text, suggesting that participants anticipated semantic errors yet required increased composition/integration effort. Experiment 2 showed enhanced P600 responses to LLM-attributed relative to human-attributed syntactic anomalies, reflecting greater reanalysis or integration difficulty in the former. Notably, neural responses to LLM-attributed semantic anomalies (but not syntactic anomalies) were further modulated by participants’ beliefs about humanlike knowledge in LLMs, with a larger N400 and a smaller P600 in participants with a stronger belief in humanlike knowledge in LLMs. These findings provide the first neurocognitive evidence that people develop mental models of LLM capabilities and adapt neural processing accordingly, offering theoretical insights aligned with multidisciplinary frameworks and practical implications for designing effective human–AI communication systems.

Information

Type
Original Article
Creative Commons
Creative Commons licence: CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press

Table 1. An example of the Chinese sentences used in Experiment 1, with glosses and translation. The bolded phrases mark the critical region. The spaces in the Chinese text indicate the segmentation of the phrases, which were presented one at a time during the experiment.


Table 2. LME models for by-trial amplitude analysis across the centro-parietal sites


Figure 1. Grand average ERPs elicited by the semantically anomalous sentences and the semantically coherent sentences in the LLM and human conditions. Waveforms are illustrated for six centro-parietal sites: C1, Cz, C2, CP1, CPz, and CP2.


Table 3. LME models for by-trial amplitude analysis of knowledge resemblance in the LLM condition


Figure 2. Mean ERP amplitude in the LLM condition for high versus low knowledge resemblance across the semantically anomalous and control conditions.


Table 4. An example of the Chinese sentences used in Experiment 2, with glosses and translation. The bolded phrase marks the critical region. The spaces in the Chinese text indicate the segmentation of the phrases, which were presented one at a time during the experiment.


Table 5. LME models for by-trial amplitude analysis across the centro-parietal sites


Figure 3. Grand average ERPs elicited by the syntactically anomalous sentences and the syntactically well-formed sentences in the LLM and human conditions. Waveforms are illustrated for six centro-parietal sites: C3, Cz, C4, P3, Pz, and P4.


Table 6. LME models for by-trial amplitude analysis of knowledge resemblance in the LLM condition