
Predicting sign learning in hearing adults: The role of perceptual-motor (and phonological?) processes

Published online by Cambridge University Press:  02 April 2018

DAVID MARTINEZ*
Affiliation:
Georgia Institute of Technology
JENNY L. SINGLETON
Affiliation:
Georgia Institute of Technology
*
ADDRESS FOR CORRESPONDENCE David Martinez, School of Psychology, Georgia Institute of Technology, 654 Cherry St., Atlanta, GA 30332. E-mail: DMartinez35@gatech.edu

Abstract

The present study aimed to identify predictors of one aspect of sign language acquisition, sign learning, in hearing nonsigners. Candidate predictors were selected based on the theory that the observed relationship between phonological short-term memory and L2 lexical learning is due in part to common perceptual-motor processes. Hearing nonsigning adults completed a sign learning task, three assessments of short-term memory for movements (movement STM; two of which used sign-like stimuli), and two visuospatial STM tasks. The final sample included 103 adults, ranging from 18 to 33 years of age. All predictors were moderately to strongly correlated with the sign learning task and with each other. A series of regression analyses revealed that both movement STM and visuospatial STM contributed uniquely to the prediction of sign learning. These results suggest that perceptual-motor processes play a significant role in sign learning and raise questions about the role of phonological processing.

Information

Type
Original Article
Copyright
Copyright © Cambridge University Press 2018 

Figure 1. Example of a pseudosign depicting the major phonological parameters of handshape, location, movement, and orientation. The sign begins with the right, dominant hand holding a “Y” handshape, oriented with the palm facing the body, and in contact with the chest. Next, the dominant hand arcs away from the body and toward the right while simultaneously rotating the hand so that the palm faces the ground. The pseudosign ends in front of the model, in neutral space.

Figure 2. An example from the nonsign paired task. After seeing the full video of the target and either item 1A in the first block or 1B in the second, the response screen appears: “Were the gestures you just saw the same or different? Click to make your choice.” Pictures display the final position of a pseudosign.

Figure 3. An example of a practice Corsi trial, set size three.

Figure 4. An example of a practice PatSpan trial, set size three. The final frame depicts the response screen, instructing participants to “click on the green button when you are finished.”

Figure 5. Depiction of the sign learning task. (a) A pseudosign–word pair from the study portion of a trial. (b) An item from the test portion: the pseudosign (cue) is presented followed by the response screen showing all words from this set, in alphabetical order.

Table 1. Descriptive statistics for all tasks

Table 2. Bivariate correlations (lower half) and partial correlations controlling for order (upper half)

Table 3. Partial correlations controlling for visuospatial STM and order

Table 4. Hierarchical regression analysis with SLT as the outcome variable

Table 5. Forward stepwise regression analysis with SLT as the outcome variable