
Language, but not music, shapes tactile perception

Published online by Cambridge University Press:  24 June 2025

Tally McCormick Miller*
Affiliation:
Brain Language Laboratory, Department of Philosophy and Humanities, Freie Universität Berlin, Berlin, Germany; Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany
Felix Blankenburg
Affiliation:
Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany; Neurocomputation and Neuroimaging Unit, Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany; Einstein Center for Neurosciences Berlin, Berlin, Germany
Friedemann Pulvermüller
Affiliation:
Brain Language Laboratory, Department of Philosophy and Humanities, Freie Universität Berlin, Berlin, Germany; Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany; Einstein Center for Neurosciences Berlin, Berlin, Germany
*
Corresponding author: Tally McCormick Miller; Email: tally.miller@fu-berlin.de

Abstract

Prior research indicates that language stimuli, when co-presented with sensory inputs, can enhance perceptual discrimination. However, whether this facilitation is unique to spoken language as opposed to non-verbal auditory stimuli, such as musical patterns, remains unclear. To address this question, we used difficult-to-discriminate tactile stimulus patterns and paired them repeatedly either with specific verbal, language-like labels or with matched musical sequences. Crucially, we implemented a within-subject learning design with well-matched stimuli counterbalanced across subjects: specific tactile patterns were paired with either linguistic labels or matched sequences of musical tones, and all subjects were exposed to both conditions. Participants’ ability to discriminate the tactile patterns presented in isolation was evaluated both before and after associative learning. Results demonstrated that after 5 days of learning, only the tactile pattern sets associated with language stimuli – not those paired with musical sequences – showed significant improvement in discrimination. These results indicate that spoken language may indeed have an advantage over other forms of auditory input in facilitating perceptual discrimination. We discuss the underlying mechanisms of this observed perceptual advantage.

Information

Type
Article
Creative Commons
Creative Commons License - CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press

Table 1. Acoustic wave forms and stimulus pairings. a, Acoustic wave forms of the verbal labels (left) and musical sequences (right). Verbal labels conform to German phonotactic rules. Musical stimuli consisted of three notes, each presented for 200 ms. b, An example of pairings between tactile stimuli (schematized in the top row of the tables) and the verbal, language-like pseudoword stimuli (in green, left) or the musical sequences (in blue, right). Pairings were determined by a random procedure and counterbalanced across subjects. For each pseudoword or tonal sequence, 28 of 40 presentations (70%) were always paired with a single pattern; the remaining 12 presentations were divided equally among the other three patterns within the set. Each of the remaining three pseudowords or tonal sequences was paired with one of the remaining tactile patterns in the same fashion. Note that pairings were counterbalanced between conditions across participants to eliminate any effects of stimulus features.
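The pairing procedure described above can be sketched in a few lines. This is a minimal illustration assuming only the counts given in the caption (28 of 40 presentations with a dominant pattern, the remaining 12 split equally, 4 each, over the other three patterns); the function and variable names (`build_pairings`, `labels`, `patterns`) are illustrative and not the authors' actual code.

```python
import random

def build_pairings(labels, patterns, n_presentations=40, consistent=28, seed=0):
    """Sketch of a 70/30 counterbalanced pairing procedure.

    Each label is paired with one randomly assigned dominant pattern on
    `consistent` of `n_presentations` trials (28/40 = 70%); the remaining
    trials are split equally among the other three patterns (4 each).
    """
    rng = random.Random(seed)
    assignment = patterns[:]
    rng.shuffle(assignment)  # random one-to-one label -> dominant pattern map
    trials = []
    for label, dominant in zip(labels, assignment):
        others = [p for p in patterns if p != dominant]
        per_other = (n_presentations - consistent) // len(others)  # 12 / 3 = 4
        seq = [dominant] * consistent + others * per_other
        rng.shuffle(seq)  # randomize trial order within this label
        trials.extend((label, pattern) for pattern in seq)
    rng.shuffle(trials)  # interleave trials across labels
    return trials
```

With four labels and four patterns this yields 160 label-pattern trials, each label co-occurring with its dominant pattern on exactly 70% of its presentations.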


Figure 1. Stimuli and design. a, Vibrotactile stimuli were presented to the right middle finger on a 4x4-pin Braille-like piezoelectric display. Each stimulus consisted of static (non-vibrating) pins and four pins that vibrated at 120 Hz. b, Tactile stimuli comprised two sets; within each set, all patterns overlapped by 2 of 4 pins. Patterns were equally similar within each set, and because the sets were parity inverses, stimulus comparisons within both sets were equally difficult. For training, one set was paired with a set of four pseudowords for the Verbal condition, and the other set was paired with a set of four short tonal sequences for the Musical condition. Set pairings were counterbalanced across subjects. Four pseudowords (not overlapping in onset, nucleus, or coda) were constructed, each conforming to German phonotactic rules and following a consonant-consonant-vowel-consonant-consonant pattern. For the Musical condition, tonal sequences of three sequential tones were created to mimic the onset-nucleus-coda constellation of the verbal stimuli. During the implicit learning phase, pairs of tactile stimuli and pseudowords formed the Verbal condition, and pairs of tactile stimuli and tonal sequences formed the Musical condition. All participants were exposed to both pairings equally. c, In the discrimination task, participants were sequentially presented with two vibrotactile stimuli, without pseudowords or musical sequences, in a two-alternative forced-choice task and were asked to indicate whether the two consecutive stimuli were identical or different.


Figure 2. Tactile pattern discrimination performance before and after learning in the context of co-presented verbal and musical stimuli. Bar graphs (left) and box plots (right) illustrate the distribution of scores across the Pre and Post blocks under the different conditions. Left: Bar graph showing average discrimination performance, quantified as d’ scores, from all participants, divided into the two conditions of interest: patterns paired with verbal labels (‘Verbal’, in green) and patterns paired with musical tone sequences (‘Musical’, in blue). Post-learning discrimination of tactile pattern sets paired with verbal labels improved significantly; there was no significant improvement for tactile sets paired with musical stimuli. Right: Box plot segregating the d’ scores from all participants into the two conditions of interest: patterns paired with Verbal (in green) or Musical (in blue) stimuli. Boxes show the interquartile range (IQR), i.e., the range of the middle 50% of the data, and the horizontal lines within them represent the median. Means are denoted by solid black points, with error bars representing the 95% confidence intervals for the mean. Whiskers extend to the furthest data point within 1.5 times the IQR from the box; outliers are plotted as individual points. Discrimination performance for the tactile stimulus set paired with language-like stimuli was significantly better (p < 0.001) after implicit learning than before.
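The d’ scores reported throughout are a signal-detection measure of discrimination sensitivity. A minimal sketch, assuming the standard definition d’ = z(hit rate) − z(false-alarm rate) applied to the same/different task, with a log-linear correction to handle perfect scores (the authors’ exact correction, if any, is not stated in this excerpt):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Compute d' sensitivity from trial counts in a same/different task.

    Applies the log-linear correction (add 0.5 to each cell) so that
    hit or false-alarm rates of exactly 0 or 1 do not produce infinite
    z-scores. Returns z(hit rate) - z(false-alarm rate).
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Illustrative counts only: 70 hits / 10 misses, 20 false alarms / 60 correct rejections
sensitivity = d_prime(70, 10, 20, 60)
```

A d’ of 0 corresponds to chance-level discrimination; larger values indicate better separation of "same" from "different" trials.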


Table 2. Planned comparisons of pre- versus post-training perceptual discrimination performance across conditions. The table shows estimated marginal means contrasts between post and pre timepoints within each condition (language and tones), derived from the linear mixed model. Values include effect estimates, standard errors (SE), t-ratios, and p-values (adjusted using the Holm-Bonferroni correction). The language condition showed a significant improvement in discrimination performance from pre to post training (p < 0.001), while the tones condition did not reach statistical significance (p = 0.142).


Figure 3. Comparative analysis of discrimination performance under Whorfian and non-Whorfian conditions across two studies. Box plots show side-by-side comparisons of participants’ discrimination d’ performance under Whorfian (left) and non-Whorfian (right) conditions, derived from two distinct experiments: the current study and Miller et al. (2018). Each box plot displays the distribution of participant d’ scores before (Pre) and after (Post) implicit associative learning. Data from the current study include the Verbal (dark green) and Musical (dark blue) conditions, contrasted with the conditions from the previous study, which implemented Concordant pairings (light green) and Discordant pairings (light blue) with verbal stimuli. Average discrimination performance is shown before (Pre) and after (Post) learning. Boxes show the interquartile range (IQR), i.e., the range of the middle 50% of the scores, and the horizontal lines represent the median. Whiskers extend to the furthest data point within 1.5 times the IQR from the box. Means are denoted by solid black points, with error bars representing the 95% confidence intervals for the mean. There were no significant differences between the Verbal and Concordant conditions, nor between the Musical and Discordant conditions, when comparing discrimination performance either before (Pre) or after (Post) the implicit association task, leading us to combine these conditions for an ‘omnibus’ analysis.


Figure 4. Combined results from the current and previous studies. Line graphs show discrimination performance (d’) before and after associative learning across both studies. The Whorfian condition (green) represents tactile patterns paired consistently with verbal labels; the non-Whorfian condition (blue) shows tactile patterns paired with musical sequences (current study, left) or discordant verbal labels (previous study, right). Error bars indicate the standard error of the mean. Despite similar initial performance levels in the Whorfian and non-Whorfian conditions, significant improvement after training was observed only in the Whorfian condition in both studies (p < 0.001), while the non-Whorfian condition showed no significant improvement. This pattern suggests that the perceptual enhancement effect depends specifically on linguistic pairing rather than on any cross-modal association.


Figure 5. Results from the collapsed analysis of data from the present study and Miller et al. (2018). Left: Illustration of the significant two-way interaction of the factors Time (Pre vs Post) and Condition (Whorfian, non-Whorfian). Bar graph showing subjects’ d’ performance before and after 1 week of associative learning. Average discrimination d’ scores for tactile sets presented in the Whorfian conditions are shown in green (Verbal labels from the present study and the Concordant condition from Miller et al. (2018)); discrimination of sets presented in the non-Whorfian conditions is shown in blue (Musical pairings from the current paper and Discordant pairings from Miller et al. (2018)). Right: Display of the significant two-way interaction as box plots, showing performance before and after training, with the Whorfian conditions combined in green and the non-Whorfian conditions in blue. Boxes show the interquartile range (IQR), i.e., the range of the middle 50% of the scores, and the central lines represent the median. Whiskers extend to the furthest data point within 1.5 times the IQR from the box. Means are denoted by solid black points, with error bars representing the 95% confidence intervals for the mean. Discrimination performance for the tactile stimulus sets associated with language-like stimuli in the Whorfian conditions was significantly better (p < 0.001) after implicit learning than before. Equal exposure without association to language-like stimuli yielded no significant improvement in the same subjects.

Supplementary material

Miller et al. supplementary material (File, 21.3 KB)