
Performance pay and non-native language comprehension: Can we learn to understand better when we’re paid to listen?

Published online by Cambridge University Press:  09 August 2023

Chasen Afghani
Affiliation:
Department of Linguistics, University of Oregon, Eugene, OR, USA
Melissa M. Baese-Berk*
Affiliation:
Department of Linguistics, University of Oregon, Eugene, OR, USA Department of Linguistics, University of Chicago, Chicago, IL, USA
Glen R. Waddell
Affiliation:
Department of Economics, University of Oregon, Eugene, OR, USA IZA Bonn, Bonn, Germany
*
Corresponding author: Melissa M. Baese-Berk; Email: mbaesebe@uoregon.edu

Abstract

Non-native speech is difficult for native listeners to understand. While listeners can learn to understand non-native speech after exposure, it is unclear how to optimize this learning. In the present study, experimental subjects transcribed non-native speech and were paid either a flat rate or based on their performance. Participants who were paid based on performance demonstrated better performance overall and faster learning than participants who were paid a flat rate. These results suggest that exposure alone is not sufficient to optimize learning of non-native speech, and that current models of this process must be revised to account for the effects of motivation and incentive.

Information

Type
Original Article
Creative Commons
Creative Commons License: CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© The Author(s), 2023. Published by Cambridge University Press

Table 1. Target sentences. Notes: Stimuli for the experiment were sentences from the Hearing in Noise Test subsection (Nilsson et al., 1994) of the Archive of L1 and L2 Scripted and Spontaneous Transcripts and Recordings (Bradlow et al., 2017)


Figure 1. Mean performance across target sentences, by treatment status. Here, we plot the mean number of correctly identified words in each of the 104 target sentences, separately for treatment subjects (n = 25) and control subjects (n = 25).


Table 2. Estimates from six model specifications asking whether performance incentives increase the number of words subjects correctly identify. In all specifications, we control for any systematic difference in average performance on target sentences with target-sentence fixed effects. In (1)–(3), we allow subject performance to vary across order with a third-degree polynomial. In (4)–(6), we instead absorb any differences in average performance by question order (i.e., the first, second, and third questions). Standard errors are reported in parentheses (significance: *** 1%, ** 5%, * 10%)
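The specification described above can be sketched in code. The following is a minimal illustration, not the authors' actual analysis: it simulates data with the experiment's shape (25 treatment and 25 control subjects, 104 target sentences) and fits an OLS model with target-sentence fixed effects, a cubic polynomial in presentation order, and standard errors clustered at the subject level. All variable names (`n_correct`, `treated`, `order`, `sentence`) are illustrative assumptions.

```python
# Hypothetical sketch of specifications (1)-(3) from Table 2.
# Simulated data only; the true treatment effect here is set to 0.5
# correctly identified words per sentence.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subj, n_sent = 50, 104

df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), n_sent),
    "sentence": np.tile(np.arange(n_sent), n_subj),
    "order": np.tile(np.arange(1, n_sent + 1), n_subj),
})
df["treated"] = (df["subject"] < 25).astype(int)  # 25 treated, 25 control
df["n_correct"] = (
    2.0 + 0.5 * df["treated"] + 0.01 * df["order"]
    + rng.normal(0, 1, len(df))
)

# Target-sentence fixed effects via C(sentence); cubic in order;
# cluster-robust standard errors at the subject level.
model = smf.ols(
    "n_correct ~ treated + order + I(order**2) + I(order**3) + C(sentence)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["subject"]})

print(round(model.params["treated"], 3))  # estimated treatment effect
```

In this sketch the coefficient on `treated` recovers the simulated incentive effect; with real data it would correspond to the average treatment/control difference reported in the table.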


Table 3. Model specifications investigating whether performance improves more on longer sentences. In all specifications, we estimate standard errors allowing for clustering at the subject level, which we report in parentheses (significance: *** 1%, ** 5%, * 10%)


Table 4. Model specifications for average treatment/control differences across the order of target sentences, asking whether learning improves with incentives to perform. In all specifications, we estimate standard errors allowing for clustering at the subject level, which we report in parentheses (significance: *** 1%, ** 5%, * 10%)


Table 5. Model specifications asking whether performance incentives induce treatment and control groups onto different learning trajectories. In all specifications, we estimate standard errors allowing for clustering at the subject level, which we report in parentheses (significance: *** 1%, ** 5%, * 10%)