
Matched or moved? Asymmetry in high- and low-level visual processing of motion events

Published online by Cambridge University Press:  10 August 2023

Xingyi Fu
Affiliation:
University of York, Department of Education, York, UK
Norbert Vanek*
Affiliation:
School of Cultures, Languages and Linguistics, University of Auckland, Auckland, New Zealand; Experimental Research on Central European Languages Lab, Charles University, Prague, Czechia
Leah Roberts
Affiliation:
University of York, Department of Education, York, UK
Corresponding author: Norbert Vanek; Email: norbert.vanek@auckland.ac.nz

Abstract

Consensus on the extent to which cross-linguistic differences affect event cognition is currently lacking. This is partly because the cognitive influence of language has rarely been examined in speakers of different languages using tasks that manipulate the level of visual processing. This study presents a novel combination of a high-level approach that upregulates the involvement of language, namely self-paced sentence-video verification, and a low-level visual detection method without language use, namely breaking continuous flash suppression (b-CFS; Yang et al., 2014). The results point to cross-linguistic effects on event cognition by revealing variation in the visual processing of manner and path by English versus Mandarin Chinese speakers. Language specificity was found at both levels of processing. An asymmetry in response speed across tasks highlights an important difference between the facilitation of detecting contrasts when recruitment of verbal labels is automatic and the facilitation of verifying correspondences when labels are overt.

Information

Type
Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2023. Published by Cambridge University Press

Figure 1. An example combination of a picture prime with a video quadruplet used in the b-CFS experiment. The photos show (left to right) a full match between the video and the prime (a man carrying a suitcase into the room), a manner mismatch (a video of a man pulling a suitcase into the room), a path mismatch (a video of a man carrying a suitcase out of the room), and a full mismatch with a visual oddity (a man carrying a dinosaur).


Figure 2. Illustration of the b-CFS procedure. The sequence of screens on the left shows what was presented on the display; the sequence on the right shows the image that participants actually saw through the mirror stereoscope.


Table 1. Means and SDs of the reaction times (ms) taken to detect videos by Chinese and English monolinguals across four conditions


Figure 3. Log-transformed stimulus detection times in Experiment 1, shown per group (CH = Chinese, EN = English) and condition (FM = full match, FMI = full mismatch, MMI = manner mismatch, PMI = path mismatch). The boxes of the box plots show the medians and the middle 50% of the log RTs. Violin and raincloud plots complement the visualization by showing the distribution of the data in each condition and group.


Figure 4. Illustration of the sentence-video verification procedure. A fixation cross is followed by a sentence prime (self-paced) and subsequently by a video.


Table 2. Means and SDs of the reaction times (ms) taken to recognize (mis)matches by Chinese and English monolinguals across the four conditions


Figure 5. Log-transformed sentence-video verification times for the correct responses in Experiment 2, shown per group (CH = Chinese, EN = English) and condition (FM = full match, FMI = full mismatch, MMI = manner mismatch, PMI = path mismatch). The boxes of the box plots show the medians and the middle 50% of the log RTs. Violin and raincloud plots complement the visualization by showing the distribution of the data in each condition and group.