
Trapped in phonotactics: How exposure to Japanese phonology influences English listeners’ perception of onset stop sequences

Published online by Cambridge University Press:  29 April 2025

Allen Shamsi*
Department of Linguistics, University of Florida, Gainesville, FL, USA
Ratree Wayland
Department of Linguistics, University of Florida, Gainesville, FL, USA
*Corresponding author: Allen Shamsi; Email: allenshamsiev@ufl.edu

Abstract

Phonotactic constraints on word-initial structures vary across languages: Slavic languages permit a wide range of sound sequences, while English prohibits stop-stop clusters and Japanese restricts any obstruent sequences. To examine the perceptual challenges faced by English monolinguals and English-speaking learners of Japanese due to acoustic similarities between native and non-native phonotactic patterns, we conducted a syllable counting experiment using non-words that mimic the phonotactic patterns of English, Japanese, and Slavic languages (e.g., /putata/, /pu̥tata/, and /ptata/). Our findings revealed that English monolinguals perceive Japanese-like tokens as having three syllables less often compared to Slavic-like sequences, indicating that the acoustics of linguistic input modulate the perception of sequences unattested in native phonotactics. Conversely, learners of Japanese showed sensitivity to underlying voiceless vowels, reflecting their integration of acoustic details with a learned phonotactic grammar that prohibits stop-stop onsets. However, exposure to the acoustics of Japanese voiceless vowels did not improve L2 listeners’ ability to perceive the Slavic-like clusters without an epenthetic vowel, suggesting that the acquired phonotactic constraint modulates the perception of acoustic input for L2 learners. Overall, the study underscores the intricate role of linguistic experience, providing insights for future research on L2 speech perception.

Information

Type
Original Article
Creative Commons
Creative Commons License - CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press

Table 1. Experimental stimuli


Table 2. Summary of the mixed-effects logistic regression model for 3 vs. 2 syllable response proportions (N = 7162), with reference levels for fixed factors: group = ML, language = English, and condition = control


Figure 1. Comparison of response proportions between ML and AJL groups across stimulus language patterns and stimulus condition types in the syllable judgment task. Higher response proportions reflect a stronger preference for 3-syllable over 2-syllable responses. The error bars represent ±1 SE.


Table 3. Summary of the linear mixed-effects model for transformed response times (N = 6736), with reference levels for fixed factors: group = ML, language = English, and condition = control


Figure 2. Comparison of transformed response times (in sec) for ML and AJL groups by stimulus language pattern and stimulus condition type in the syllable judgment task. The error bars represent ±1 SE.

Supplementary material

Shamsi and Wayland supplementary material (File, 35.7 KB)