
Predicting sign learning in hearing adults: The role of perceptual-motor (and phonological?) processes

Published online by Cambridge University Press:  02 April 2018

DAVID MARTINEZ*
Affiliation:
Georgia Institute of Technology
JENNY L. SINGLETON
Affiliation:
Georgia Institute of Technology
*ADDRESS FOR CORRESPONDENCE David Martinez, School of Psychology, Georgia Institute of Technology, 654 Cherry St., Atlanta, GA 30332. E-mail: DMartinez35@gatech.edu

Abstract

The present study aimed to identify predictors of one aspect of sign language acquisition, sign learning, in hearing nonsigners. Candidate predictors were selected based on the theory that the observed relationship between phonological short-term memory and L2 lexical learning is due in part to common perceptual-motor processes. Hearing nonsigning adults completed a sign learning task, three assessments of short-term memory for movements (movement STM; two of which used sign-like stimuli), and two visuospatial STM tasks. The final sample included 103 adults, ranging between 18 and 33 years of age. All predictors were moderately to strongly correlated with the sign learning task and to each other. A series of regression analyses revealed that both movement and visuospatial STM uniquely contributed to the prediction of sign learning. These results suggest that perceptual-motor processes play a significant role in sign learning and raise questions about the role of phonological processing.

Type: Original Article
Copyright © Cambridge University Press 2018

In 2013, American Sign Language (ASL) was the third most frequently taught second language (L2) in US schools of higher education (Goldberg, Looney, & Lusin, 2015). Despite this, there exists a paucity of research on the cognitive processes involved in L2 sign language learning by hearing individuals (or deaf individuals for that matter; for the state of the field, see Pichler & Koulidobrova, 2016). Given the popularity of ASL as an L2, the practical importance of investigating L2 sign learning is evident; however, this kind of research is also important for theory development, as researching the processes involved in learning an L2 in a second modality can provide insight into those processes that are universal to all languages and those that are unique to a particular language modality.

Because the journey that is learning an L2 often begins with lexical learning, we chose to begin our own line of investigations by identifying predictors of sign learning in hearing nonsigners. As a component of language acquisition, lexical learning is related to grammar acquisition (for a review, see Bates & Goodman, 1997), L2 class performance (Cooper, 1964; Krug, Shafer, Dardick, Magalis, & Parenté, 2002), and language learning aptitude (Cooper, 1964; Li, 2015). Moreover, there is a large body of research on word learning in spoken languages that can be drawn upon.

One factor that figures prominently in the prediction of word learning is phonological short-term memory (STM). In the multicomponent model of working memory, phonological STM is served by the phonological loop, a system composed of a temporary phonological store and an articulatory rehearsal mechanism that aids in maintaining phonological representations (Baddeley, 2012). Because the phonological store deals with abstract phonological information, it is further theorized as amodal, that is, capable of maintaining phonological information from any language, whether spoken or signed (Baddeley, 2015; Baddeley, Gathercole, & Papagno, 1998). However, the literature also suggests that modality-specific processes play a significant role. Below, we consider this evidence and provide our own interpretation, which guided the research described herein.

PHONOLOGICAL STM AND LEXICAL LEARNING (IN SPOKEN LANGUAGES)

In spoken language research, phonological STM is typically operationalized as the number of phonological items (e.g., digits, letters, words, or nonsense syllables) that one can recall after a brief retention interval. Despite their simplicity (Marshalek, Lohman, & Snow, 1983), measures of phonological STM have been found to predict native and L2 word learning in children (Gathercole & Baddeley, 1989; Gathercole, Hitch, Service, & Martin, 1997; Gathercole, Willis, Baddeley, & Emslie, 1994; Masoura & Gathercole, 1999, 2005; Masoura, Gathercole, & Bablekou, 2004) as well as adults (Atkins & Baddeley, 1998; Gupta, 2003; Hummel & French, 2016; Martin & Ellis, 2012; O'Brien, Segalowitz, Collentine, & Freed, 2006; O'Brien, Segalowitz, Freed, & Collentine, 2007).

Gathercole (2006) hypothesized that the relationship between phonological STM and word learning exists because both rely on similar processes, namely, auditory, phonological, and speech-motor processes. She cautioned, however, that the relationship is strongest when items in the memory task consist of unfamiliar phonological structures such as pseudowords or words from an unknown L2; the more unfamiliar the phonological material, the less long-term memory, in the form of lexical and phonetic knowledge, can mediate the relationship between phonological STM and word learning (Gathercole, 1995; Gathercole, Pickering, Hall, & Peaker, 2001; Hulme, Maughan, & Brown, 1991; Thorn & Frankish, 2005). Accordingly, nonword repetition is generally viewed as a better predictor of word learning than digit span (Baddeley et al., 1998; Gathercole et al., 1994), and the relationship between L2 word learning and phonological STM tasks employing L2 words as stimuli attenuates as individuals become proficient in the L2 (Masoura & Gathercole, 2005).

PERCEPTUAL-MOTOR PROCESSES IN PHONOLOGICAL STM AND LEXICAL LEARNING

The caveat that the relationship between phonological STM and word learning is attenuated to the degree that linguistic knowledge can be utilized suggests that phonological processing does not drive the relationship between phonological STM and word learning but rather acts as a nuisance variable. Converging evidence from behavioral and neuroimaging studies suggests that, beyond general mnemonic and attentional processes, the relationship between phonological STM and word learning is due to common perceptual-motor processes.

PERCEPTUAL-MOTOR PROCESSES IN PHONOLOGICAL STM AND WORD LEARNING

In hearing individuals, phonological STM is disrupted by sound similarity (Baddeley, 1966; Conrad & Hull, 1964), item length (Baddeley, Thomson, & Buchanan, 1975), articulatory suppression (Baddeley, 1986), and irrelevant speech (Colle & Welsh, 1976) and sounds (such as tones and instrumental music; Jones & Macken, 1993; Salamé & Baddeley, 1989), that is, by perceptual and motor manipulations (for similar arguments, see Jones, Hughes, & Macken, 2006; Wilson, 2001). Briefly, the similarity effect occurs when to-be-remembered stimuli sound similar (sets of similar sounding items [e.g., B, E, G, P, T] are not remembered as well as dissimilar items [e.g., D, X, I, L, Q]), suggesting that linguistic material is encoded in such a way that information about the surface form is retained. The length effect is assumed to occur because the stimuli are being rehearsed (overtly or covertly); items that take longer to articulate take more time to rehearse and therefore cannot be refreshed before they decay from a memory buffer. Articulatory suppression, in which one is asked to repeat a short word or syllable during encoding, prevents articulatory rehearsal, and as a result, performance is lower compared to a nonsuppressed condition. Finally, irrelevant speech and nonspeech sounds affect performance on phonological STM tasks, possibly because, as Neath (2000) theorizes, some features of the irrelevant sounds are encoded during a STM task and serve as cues during recall; these cues are erroneous and therefore disrupt performance.

These same perceptual and motor manipulations have also been found to disrupt word learning, but only when it involves stimuli sufficiently different from the first language (L1; Papagno, Valentine, & Baddeley, 1991; Papagno & Vallar, 1992). What this suggests is that word learning in a language that is sufficiently different from the L1 heavily relies on perceptual-motor processes, and hence perceptual-motor manipulations negatively affect learning. When to-be-learned stimuli are known or derived from a language that is similar to the L1, then individuals can make use of lexical knowledge and associative-semantic processes, which are not affected by perceptual-motor manipulations.

PERCEPTUAL-MOTOR PROCESSES IN PHONOLOGICAL STM AND SIGN LEARNING

In the realm of signed languages, a series of studies by Wilson and colleagues revealed that STM for signs is also affected by similarity, length, suppression, and irrelevant stimuli (Wilson & Emmorey, 1997, 1998, 2003; Wilson & Fox, 2007). While the effects were similar, the means of eliciting them differed, as described next.

Signed languages, as visuospatial-manual languages, are composed of the simultaneous presentation of the major phonological parameters of location (i.e., place of articulation), handshape, orientation, and movement (see Figure 1; Brentari, 1998; Klima & Bellugi, 1979). Accordingly, rather than elicit the similarity effect by presenting similar sounding stimuli, Wilson and Emmorey (1997) presented visually similar signs to deaf individuals. Analogously, the suppression effect was evoked by asking deaf participants to produce a pseudosign during encoding (Wilson & Emmorey, 1997); the length effect by using signs with repetitive or relatively long movements (Wilson & Emmorey, 1998); and the irrelevant stimuli effect by displaying irrelevant pseudosigns and unnamable rotating figures (Wilson & Emmorey, 2003). Finally, Wilson and Fox (2007) showed that the similarity, suppression, and length effects could be elicited in hearing nonsigners tasked with remembering pseudosigns.

Figure 1. Example of a pseudosign depicting the major phonological parameters of handshape, location, movement, and orientation. The sign begins with the right, dominant hand holding a “Y” handshape, oriented with the palm facing the body, and in contact with the chest. Next, the dominant hand arcs away from the body and toward the right while simultaneously rotating the hand so that the palm faces the ground. The pseudosign ends in front of the model, in neutral space.

With regard to sign learning, research is scant; however, two studies are germane. First, Williams and Newman (2016a) investigated the effect of perceptual similarity. Analogous to the similarity effect in word learning, visually distinct signs were learned more rapidly than visually similar signs. Second, Williams, Darcy, and Newman (2016a) investigated the role that, among other factors, a phonological STM task, digit span, would play in the prediction of ASL vocabulary growth. A multiple linear regression analysis with ASL vocabulary growth as the outcome variable revealed that neither forward nor backward digit span was predictive. Williams et al. (2016a) theorized that digit span was not predictive because in nonsigners, this task would not assess critical modality-specific, that is, perceptual-motor, processes. They cautioned, however, that due to the small sample size (n = 25), their results might not generalize.

PERCEPTUAL-MOTOR PROCESSES: EVIDENCE FROM NEUROIMAGING

Neuroimaging studies corroborate the behavioral studies reported above (Bavelier et al., 2008; Campbell, MacSweeney, & Waters, 2008; Pa, Wilson, Pickell, Bellugi, & Hickok, 2008; Rönnberg, Rudner, & Ingvar, 2004; Rudner, 2015; Rudner, Andin, & Rönnberg, 2009; Williams, Darcy, & Newman, 2015, 2016b). In general, researchers find differences in modality-specific areas, such that hearing individuals show greater activation of areas associated with auditory processing while deaf individuals exhibit greater activation of visual and motor areas. Both deaf and hearing individuals, however, show similarities in areas associated with language processing, such as the inferior temporal gyrus and posterior superior temporal gyrus.

The finding that deaf and hearing individuals show similar patterns of activation in language processing areas provides evidence of amodal language processing and possibly of an amodal phonological loop (Baddeley, 2012; Vallar, 2006); however, the results of two longitudinal neuroimaging studies suggest that linguistic processing is only possible after a significant amount of L2 learning has occurred (Newman-Norlund, Frey, Petitto, & Grafton, 2006; Williams et al., 2016b). Across the two studies, the evidence indicated that initially, individuals processed L2 stimuli in a nonlinguistic fashion, with significant activation located primarily in respective sensorimotor areas; as learning progressed, however, there was increased left lateralization and activation of classic language processing areas, namely, the inferior temporal gyrus and posterior superior temporal gyrus. This transition from nonlinguistic to linguistic processing occurred in hearing individuals learning either a spoken or a signed L2, though the transition did occur more rapidly in spoken L2 learning (Newman-Norlund et al., 2006).

The slower transition from nonlinguistic to linguistic processing observed in signed L2 learning may have occurred because, as Williams (2017) posits, hearing L2 sign learners face an additional hurdle in transitioning from nonlinguistic to linguistic processing, in that they first must “acquire the unique aspects of their new visual language modality before amodal linguistic representations can be accurately acquired” (p. v). In contrast, a hearing individual already has finely tuned auditory-perceptual and speech-motor skills to aid them in learning a spoken L2 (see Rosen, 2004). In sum, initial L2 learning, especially in an L2 that is significantly different from the L1, appears to rely heavily on modality-specific perceptual-motor processes.

Taken together, behavioral and neuroimaging studies indicate that individuals confronted with either a sign or a spoken language engage, when possible, similar linguistic-semantic processes but differ in the perceptual and motor processes employed: sign languages make use of visual perception and sign-motor processes while spoken languages make use of auditory perception and speech-motor processes. When it is not possible to rely on linguistic knowledge, such as during a phonological STM task where stimuli are drawn from an unknown language that is quite different from the L1, then the onus falls on modality-specific processes. Moreover, an abundance of research on word learning reveals that the same perceptual and motor manipulations that disrupt phonological STM also disrupt word learning. Thus, the relationship between phonological STM and lexical learning appears to be driven by common mnemonic, attentional, and critically, perceptual-motor processes.

THE PRESENT STUDY

The primary objective of the present study was to identify predictors of sign learning in hearing nonsigners. Predictors were selected based on the theory that the relationship between phonological STM and lexical learning is due in part to common perceptual-motor, not phonological, processes. Thus, we hypothesized that in hearing nonsigners, STM for movements (movement STM), whether signlike (nominally phonological STM) or not, would be related to sign learning, as both movement STM and sign learning involve encoding and binding biological motion and visuospatial features such as limb configurations (Moulton & Kosslyn, 2009; Porro et al., 1996; Stankov, Seizova-Cajić, & Roberts, 2001; Vicary, Robbins, Calvo-Merino, & Stevens, 2014; Vicary & Stevens, 2014). In addition, as a subcomponent of movement STM, visuospatial STM should be related to sign learning, albeit to a lesser extent, as it shares fewer processing components with sign learning than movement STM does. In order to test these hypotheses, three movement STM and two visuospatial STM tasks varying on a number of dimensions (e.g., response format, set size, and scoring procedure) were used. By ensuring that tasks varied in a number of ways, we attempted to reduce the likelihood that any relationship found was due to superficial similarities (i.e., common method variance).

A secondary objective was to identify which of the measures best predict sign learning. We did not have predictions about specific tasks, but we hypothesized that visuospatial STM would account for variance in sign learning performance over and above movement STM. Though observing human body movements necessarily engages visuospatial processing, a number of studies have reported a dissociation between visuospatial and biological motion processing (Ding et al., 2015; Peelen & Downing, 2007; Seemüller, Fiehler, & Rösler, 2011; Urgolites & Wood, 2013a, 2013b; Zihl & Heywood, 2015). Studies investigating movement STM have found that memory for static-visual features (e.g., color and body configurations) is suppressed when biological motion processing is engaged (Ding et al., 2015; Vicary et al., 2014; Vicary & Stevens, 2014). It stands to reason that it would be difficult for one to encode and bind all of the features that distinguish one sign from another after a single exposure. In naturalistic settings, as well as in the paradigm used here (paired associate learning with multiple learning trials), however, individuals can shift their attention to different aspects as they are repeatedly exposed to target signs; movement STM tasks, by definition, do not offer this opportunity. Consequently, we expected that a more direct assessment of visuospatial STM would improve measurement accuracy and therefore account for a greater proportion of variance in sign learning than movement STM alone.

METHOD

Participants

One hundred and seven participants between the ages of 18 and 33 (M = 21.7, SD = 4.1, 55% female) were recruited from our university subject pool (55%) and surrounding area (45%), including other universities and local colleges. Participants recruited from the university subject pool were compensated with course credit; all others received up to $25. All participants were hearing, right-handed, fluent in English, had normal or corrected-to-normal vision, and were free of upper-body movement disorders. One participant reported having attended an ASL course but stated she was not fluent or currently enrolled; no other participant reported experience with ASL. We did not inquire about participants’ familiarity with fingerspelling.

Design and procedure

All tasks were administered in a single, private session lasting no more than 2 hr, including an optional break. Tasks were programmed in PsychoPy (Peirce, 2007) and presented on a MacBook Pro laptop. Two tasks required reproducing movements and were filmed using a Canon video camera mounted on a tripod so they could be scored later.

Participants completed three measures of movement STM and two visuospatial STM tasks; a pseudosign-word paired associate learning task; a questionnaire asking for demographic and achievement test scores; and for those participants recruited from the university subject pool, a record release form to access achievement test records. Unfortunately, few participants self-reported achievement scores, and those that we were able to access were generally in the top 10th percentile resulting in a highly restricted range of scores; as a result, achievement score data will not be reported here.

Written consent to participate in the study was always obtained at the beginning of the session; the questionnaire and, when applicable, the record release form were completed at the end. The remaining tasks were administered using a Latin-square design, initially ordered as follows: sign learning task, Corsi block tapping task, nonsign paired task, movement span, pattern span, and nonsign repetition (see below for task descriptions). One to three practice items with feedback were provided for all tasks.
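
To make the counterbalancing concrete, the sketch below builds a simple cyclic Latin square over the six tasks and assigns participants to its rows in rotation. It is written in Python; the task labels, function names, and the cyclic construction are illustrative rather than a reproduction of the study's actual assignment script.

    # Illustrative cyclic Latin square: each task occupies each serial position
    # exactly once across the six orders.
    TASKS = ["SLT", "Corsi", "NSPT", "MoveSpan", "PatSpan", "NSRT"]

    def latin_square_orders(tasks):
        n = len(tasks)
        return [[tasks[(row + col) % n] for col in range(n)] for row in range(n)]

    def order_for_participant(participant_index, tasks=TASKS):
        orders = latin_square_orders(tasks)
        return orders[participant_index % len(orders)]

    # Example: the first participant receives the initial order listed above;
    # the second starts with the Corsi task, and so on.
    print(order_for_participant(0))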

Instruments

Movement STM

MOVEMENT SPAN (MOVESPAN)

The MoveSpan task (Wu & Coulson, 2014) is a movement STM task requiring free recall of manual gestures that are difficult to verbally recode and do not necessarily follow the phonotactics of any particular sign language (e.g., there is no dominant hand and a number of movements are asymmetric, disyllabic, and/or place one of the hands fully behind the back; see Brentari, 2006).

In the MoveSpan, individuals are presented with three sets at each length from one to five movements. After viewing a set, participants freely recall movements at their own pace by mirroring them. Raters, trained to a 0.80 intraclass correlation consistency (2,1; Shrout & Fleiss, 1979) criterion, later scored participants’ recorded responses, awarding 1 point for every movement correctly recalled and 0.5 point for a movement that deviated from the target by one criterion (see Appendix A for scoring instructions). Movements within a set were fixed; however, sets were presented randomly. MoveSpan score was calculated as the total number of points earned across all sets, and thus the maximum score was 45 points.
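
As a concrete illustration of this scoring rule, a minimal sketch follows (Python; the input format is hypothetical, and only the 1 / 0.5 / 0 point logic and the 45-point maximum come from the task description above).

    # MoveSpan scoring sketch: each target movement earns 1, 0.5, or 0 points
    # according to how many scoring criteria the reproduction violates (Appendix A).
    def score_movement(num_criteria_violated):
        if num_criteria_violated == 0:
            return 1.0   # all parameters reproduced correctly
        if num_criteria_violated == 1:
            return 0.5   # deviated from the target by exactly one criterion
        return 0.0       # deviated by more than one criterion

    def movespan_total(violations_per_movement):
        # violations_per_movement: one count per target movement across all sets
        return sum(score_movement(v) for v in violations_per_movement)

    # Three sets at each length from 1 to 5 give 3 * (1+2+3+4+5) = 45 possible points.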

NONSIGN REPETITION TASK (NSRT)

The NSRT (Mann, Marshall, Mason, & Morgan, 2010) was designed to be analogous to nonword repetition. It consists of 40 pseudosigns that obey British Sign Language phonotactics but are themselves meaningless (Mann et al., 2010, p. 15).

In the NSRT, participants view video clips of pseudosigns produced by a deaf fluent British Sign Language user, one at a time, and are expected to mirror each item immediately after presentation. Requiring participants to mirror the items rather than to reverse perspective deviates from the protocol followed by Mann et al. (2010); this change was made to maintain consistency with the MoveSpan task and thereby curtail errors due to participants confounding instructions across tasks. Because the model appeared to perform all single-handed signs with the left hand, mirroring these signs required participants to use their right hand.

Items were presented randomly, and participant performance was recorded and scored offline by raters trained to a 0.80 intraclass correlation consistency criterion. Scoring was dichotomous, with 1 point awarded for correctly mirrored pseudosigns and no points for reproductions that differed from the target pseudosign by one or more parameters (see Appendix B for scoring instructions). Participant scores on the NSRT were calculated by summing the total points awarded, and the maximum score was 40.
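
For readers who wish to check rater agreement against the 0.80 criterion used for the MoveSpan and NSRT, the sketch below computes a consistency-type intraclass correlation for single ratings (one of the Shrout & Fleiss forms) from a productions-by-raters score matrix. It is offered as an illustration in Python rather than a reproduction of our reliability script.

    import numpy as np

    def icc_consistency(scores):
        # scores: 2-D array, rows = scored productions, columns = raters
        x = np.asarray(scores, dtype=float)
        n, k = x.shape
        grand = x.mean()
        ss_rows = k * np.sum((x.mean(axis=1) - grand) ** 2)
        ss_cols = n * np.sum((x.mean(axis=0) - grand) ** 2)
        ss_total = np.sum((x - grand) ** 2)
        ms_rows = ss_rows / (n - 1)
        ms_error = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
        # Consistency ICC for a single rating; the absolute-agreement variant adds
        # a between-rater variance term to the denominator.
        return (ms_rows - ms_error) / (ms_rows + (k - 1) * ms_error)

    # Raters could be retrained until, e.g., icc_consistency(score_matrix) >= 0.80.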

NONSIGN PAIRED TASK (NSPT; FIGURE 2)

The NSPT was designed similarly to Bochner and colleagues’ ASL Discrimination Test (ASL-DT; Bochner, Christie, Hauser, & Searls, 2011; Bochner et al., 2016); however, our tasks differ in that the ASL-DT is intended as an assessment of ASL proficiency while the NSPT is used here as a movement STM task similar to nonword recognition. In spoken language research, nonword recognition correlates with nonword repetition and vocabulary development (Martin & Ellis, 2012; O'Brien et al., 2006, 2007).

Figure 2. An example from the nonsign paired task. After seeing the full video of the target and either item 1A in the first block or 1B in the second, the response screen appears: “Were the gestures you just saw the same or different? Click to make your choice.” Pictures display the final position of a pseudosign.

In the NSPT, participants view a target pseudosign and must judge whether a reproduction was the same as or different from the target according to specified criteria, similar to the criteria used to score the NSRT (see Appendix B). Reproductions were designed to either faithfully reproduce the target or differ in one of the following parameters: movement, orientation, or handshape. A parametric approach, with the previously named parameters as categories, was used to create all pseudosigns, which were later judged phonotactically permissible by a native ASL signer (the second author). A parametric approach to pseudosign construction has been used previously (e.g., Orfanidou, Adam, McQueen, & Morgan, 2009; Pa et al., 2008; Wilson & Fox, 2007) and allows for a great degree of control in manipulating item characteristics (see Mann et al., 2010). Approximately 60% of reproductions were classified as different, with about 40% of those involving a change in movement and the remaining 60% equally divided between orientation and handshape.

Participants began the NSPT by viewing a brief (2 min, 44 s) instructional video. The video introduced participants to the task, instructed them on the judging criteria, and provided examples. Next, participants completed three practice items with a researcher providing feedback. After completing the practice items and receiving feedback, the critical trials began.

There were two blocks. In both blocks, participants viewed a target pseudosign produced by a hearing male nonsigner, immediately followed by one of two hearing female nonsigners “attempting” to copy the target pseudosign. The same target pseudosigns were used across the two blocks; however, a different female nonsigner performed the reproductions in each block. This was done to focus the participants’ attention on the intended parameters and to limit the degree to which slight variations in production may lead to erroneous decisions.

Next, a response screen prompted the participant to judge the reproduction as same or different from the target. As in Bochner et al.’s ASL-DT, an individual received a point for a given target only if the judgments of both of its reproductions (one in each block) were correct. In this way, the chance of guessing correctly was reduced from 50% to 25%. There were 55 paired comparisons, and so the maximum possible score was 55.
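
A minimal sketch of this scoring rule follows (Python; the data layout is hypothetical). Because a point requires two correct judgments of the same target, a participant guessing independently on each judgment succeeds with probability 0.5 × 0.5 = 0.25 per target.

    # NSPT scoring sketch: one point per target only when the judgments from both
    # blocks are correct.
    def nspt_score(block1_correct, block2_correct):
        # block1_correct, block2_correct: booleans per target, in target order (55 here)
        assert len(block1_correct) == len(block2_correct)
        return sum(1 for b1, b2 in zip(block1_correct, block2_correct) if b1 and b2)

    # The maximum possible score equals the number of targets (55), and chance
    # performance corresponds to roughly 0.25 * 55, or about 14 points.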

Visuospatial STM Tasks

CORSI BLOCK TAPPING TASK (CORSI; FIGURE 3)

The Corsi task (Milner, 1971) is a dynamic visuospatial STM task that has been shown to load more heavily on spatial processing (Della Sala, Gray, Baddeley, Allamano, & Wilson, 1999). Items consist of 4–9 rectangles flashing sequentially on the computer screen for 1000 ms each. After presentation of an item, participants were to immediately click the rectangles in the same order they had flashed. There were three blocks, each set length was randomly presented once within a block of trials, and therefore, each set length was presented three times. Participant scores were calculated using a partial scoring method in which a single point was awarded for each square correctly recalled in its serial position (Conway et al., 2005). The maximum possible score was 117.
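
The partial scoring method can be sketched as follows (Python; the sequence representation is illustrative).

    # Corsi partial-credit scoring sketch: one point for every rectangle recalled in
    # its correct serial position.
    def corsi_trial_score(target_sequence, response_sequence):
        # zip truncates at the shorter sequence, so omitted positions earn no credit
        return sum(1 for t, r in zip(target_sequence, response_sequence) if t == r)

    def corsi_total(trials):
        # trials: list of (target_sequence, response_sequence) pairs
        return sum(corsi_trial_score(t, r) for t, r in trials)

    # Three blocks with set sizes 4-9 give a maximum of 3 * (4+5+6+7+8+9) = 117 points.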

Figure 3. An example of a practice Corsi trial, set size three.

PATTERN SPAN (PATSPAN; FIGURE 4)

The PatSpan task is an adaptation of the Visual Pattern Test, which has been shown to load more heavily on static-visual processing (Della Sala et al., 1999). Items in the PatSpan consisted of a 5 × 6 array of rectangles with 4 to 13 of them shaded black for 3000 ms. After presentation, a visual mask was presented for 300 ms followed by a blank 5 × 6 array. Participants reproduced the pattern of shaded rectangles they had just viewed by using the computer track pad to click on the rectangles presented in the array. There were three different items for each set length; items were the same for all participants though presentation was randomized. PatSpan scores were calculated by awarding a single point for every pattern correctly recalled; thus the maximum score was 30.
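
In contrast to the Corsi task's partial credit, PatSpan credit is all-or-none at the pattern level; a minimal sketch follows (Python; the cell representation is illustrative).

    # PatSpan scoring sketch: a point is awarded only when the full set of shaded
    # cells is reproduced exactly (order is irrelevant, so patterns are compared as sets).
    def patspan_total(trials):
        # trials: list of (target_cells, response_cells) pairs of cell indices
        return sum(1 for target, response in trials if set(target) == set(response))

    # Three patterns at each set size from 4 to 13 give a maximum of 3 * 10 = 30 points.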

Figure 4. An example of a practice PatSpan trial, set size three. The final frame depicts the response screen, instructing participants to “click on the green button when you are finished.”

Sign Learning

SIGN LEARNING TASK (SLT; FIGURE 5)

The criterion variable, the SLT, is a paired-associate learning task employing a study-test learning procedure. Such tasks have been shown to result in long-term retention (Seibert, 1930; Thorndike, 1908) and correlate with verbal ability and language aptitude (Cooper, 1964; Hundal & Horn, 1977; Kyllonen & Tirre, 1988; Kyllonen & Woltz, 1989). Moreover, utilizing paired-associate learning in the lab, as opposed to assessing vocabulary growth in ASL students, confers a greater degree of control, for example, in the amount of time and method of study.

Figure 5. Depiction of the sign learning task. (a) A pseudosign–word pair from the study portion of a trial. (b) An item from the test portion: the pseudosign (cue) is presented followed by the response screen showing all words from this set, in alphabetical order.

The task consisted of two sets of 12 visually presented pseudosign–English word pairs. Pseudosigns were used for the following reasons: as detailed above, doing so allows us to easily manipulate item characteristics; pseudosigns and English words can be paired randomly (for experimental purposes); and the task can be reused in a future study with proficient signers. As with the NSPT, pseudosigns were created using a parametric approach and deemed phonotactically permissible by a fluent ASL signer (the second author); a hearing nonsigner produced all pseudosigns. All English words were five-letter high-frequency nouns selected from the SUBTLEX-US corpus (Brysbaert & New, 2009).

The SLT began with instructions introducing the task followed by a single example. Critical trials consisted of two blocks, each with 12 different pseudosign–English word pairs, for a total of 24 pseudosign–English pairs. Within each block, there were two study-test trials. During the study portion, each pseudosign was presented for an average duration of 3500 ms, immediately followed by its randomly associated English word for 1000 ms. During testing, participants viewed a randomly selected pseudosign immediately followed by the response screen showing all 12 possible English response words. After the participant made a selection by mouse click, the next pseudosign was shown, and so on; feedback was never provided, and all pairs were studied and tested again during the second study-test trial, regardless of prior performance. The dependent variable was the total number of words correctly recalled across the two blocks; thus the maximum score was 48.
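
The structure of one block can be sketched as follows (Python; present and get_choice are hypothetical display and response helpers, and timing and stimulus handling are greatly simplified relative to the PsychoPy implementation).

    import random

    def run_block(pairs, n_study_test_trials=2):
        # pairs: list of (pseudosign_id, english_word) tuples; 12 per block
        words = sorted(word for _, word in pairs)   # response options, alphabetical
        correct = 0
        for _ in range(n_study_test_trials):
            # Study portion: each pseudosign immediately followed by its paired word.
            for pseudosign, word in pairs:
                present(pseudosign, duration_ms=3500)   # hypothetical display helper
                present(word, duration_ms=1000)         # hypothetical display helper
            # Test portion: cue with each pseudosign in random order; no feedback.
            for pseudosign, word in random.sample(pairs, len(pairs)):
                choice = get_choice(cue=pseudosign, options=words)  # hypothetical helper
                correct += int(choice == word)
        return correct  # at most 24 per block, 48 across the two blocks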

RESULTS

The data were assessed for univariate outliers using a cutoff z score of 3.29 (Field, 2013) and by graphical examination. Four participants achieved z scores at or above the cutoff on at least one variable, and evidence from a number of scatter plots indicated that these participants completed the study in a perfunctory manner or were not representative of the intended population. As a result, scores from these 4 individuals were removed from further analysis, leaving the final sample size at 103.
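
A minimal sketch of this screening step follows (Python with pandas; the file and column names are illustrative).

    import pandas as pd

    def flag_outliers(df, columns, cutoff=3.29):
        # Flag any participant whose standardized score reaches the cutoff on any task.
        z = (df[columns] - df[columns].mean()) / df[columns].std()
        return df.index[(z.abs() >= cutoff).any(axis=1)]

    # data = pd.read_csv("scores.csv")
    # data = data.drop(index=flag_outliers(data, ["MoveSpan", "NSRT", "NSPT",
    #                                             "Corsi", "PatSpan", "SLT"]))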

Descriptive statistics and internal reliability coefficients (Cronbach α) are provided in Table 1. The items used to calculate Cronbach α were derived as follows: MoveSpan, Corsi, and PatSpan reliabilities were each calculated by forming three subscores composed of one instance of each set length (see Engle, Tuholski, Laughlin, & Conway, 1999); NSPT and NSRT reliabilities were calculated using each item as a score (i.e., as is typical); for the SLT, the “items” consisted of subscores derived by summing the points awarded for correctly identifying each instance of a particular word. All internal reliability coefficients were near or above 0.80, indicating acceptable reliabilities. In addition, the correlation between the two SLT blocks was strong (r = .617), providing further evidence of reliability.
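
For completeness, Cronbach's α can be computed from such a participants-by-items matrix as sketched below (Python with NumPy; the data layout is illustrative).

    import numpy as np

    def cronbach_alpha(item_scores):
        # item_scores: 2-D array, rows = participants, columns = items or subscores
        x = np.asarray(item_scores, dtype=float)
        k = x.shape[1]
        item_variances = x.var(axis=0, ddof=1).sum()
        total_variance = x.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variances / total_variance)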

Table 1. Descriptive statistics for all tasks

a Expressed as percent of score possible.

Note: MoveSpan = movement span; NSRT = nonsign repetition; NSPT = nonsign paired task; Corsi = Corsi block tapping task; PatSpan = pattern span; SLT = sign learning task.

Next, correlations were analyzed to assess the degree to which predictors correlated with the outcome variable and, as we were concerned with both observed and latent variables, to assess construct validity. Table 2 shows bivariate correlations among all tasks and SLT trials and, because there may have been an effect of task administration order, partial correlations controlling for variance due to order effects.

Table 2. Bivariate (bottom half) and partial (controlling for order; upper half) correlations

Note: The lower half shows bivariate correlations; the upper half shows partial correlations controlling for task administration order. All correlations are significant at the .01 level. MoveSpan = movement span; NSRT = nonsign repetition; NSPT = nonsign paired task; Corsi = Corsi block tapping task; PatSpan = pattern span; SLT = sign learning task.

All predictors were positively related to the SLT, with bivariate correlations ranging between .400 and .535. Evidence of construct validity was provided by the strong correlation between the two visuospatial STM tasks. With regard to the movement STM tasks, we found that these tasks positively correlated with each other; however, they were also moderately to strongly correlated with the visuospatial STM tasks. Comparison of the bivariate and partial correlations provided in Table 2 indicated that administration order did not significantly affect the pattern of results.

After noting the relationships between visuospatial STM and all other tasks, we felt it prudent to conduct a partial correlation analysis to investigate the degree to which visuospatial processing drove these relationships (see Table 3). Despite the movement STM tasks varying in a number of ways (e.g., response format, set size, and scoring procedure), after partialing out the variance shared with the two visuospatial STM tasks, all movement STM tasks remained positively correlated with each other, indicating a significant amount of shared variance over and above that which is shared with visuospatial STM. The positive relationship between the movement STM tasks and SLT also remained significant. Further controlling for task order did not substantially change the pattern of results. Thus, in line with our expectations, these results indicated that the predictors could be classified as measures of two related but distinct constructs, namely, movement STM and visuospatial STM.
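
Partial correlations of this kind can be reproduced in principle by correlating regression residuals, as sketched below (Python with NumPy and pandas; column names are illustrative, and this residual-based approach is equivalent to the usual partial correlation formula).

    import numpy as np
    import pandas as pd

    def residualize(y, controls):
        # Remove the variance in y that is linearly predictable from the controls.
        X = np.column_stack([np.ones(len(controls)), controls])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return y - X @ beta

    def partial_corr(df, x, y, controls):
        C = df[controls].to_numpy(dtype=float)
        rx = residualize(df[x].to_numpy(dtype=float), C)
        ry = residualize(df[y].to_numpy(dtype=float), C)
        return np.corrcoef(rx, ry)[0, 1]

    # e.g., partial_corr(data, "MoveSpan", "SLT", controls=["Corsi", "PatSpan"])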

Table 3. Partial correlations controlling for visuospatial STM and order

Note: The lower half shows partial correlations controlling for variance shared with the Corsi and PatSpan tasks; the partial correlations in the upper half also control for task administration order. All correlations are significant at the .01 level. MoveSpan = movement span; NSRT = nonsign repetition; NSPT = nonsign paired task; Corsi = Corsi block tapping task; PatSpan = pattern span; SLT = sign learning task.

Finally, regression analyses were conducted to (a) test the hypothesis that visuospatial STM accounts for variance in sign learning over and above movement STM and (b) assess which predictor or set of predictors accounted for the greatest variance in the SLT. Note, because the previous two analyses indicated that task order did not substantially affect the results, we chose to disregard it for subsequent analyses.

For the first analysis, in order to more accurately assess the contribution of each construct, composite scores, derived by standardizing and summing construct-relevant scores (e.g., Corsi and PatSpan scores were standardized and summed together to form the visuospatial composite), were used in place of raw scores. The movement STM composite (MoveScore) was predictive of SLT, F (1, 101) = 54.054, p < .001, accounting for 34.9% of the variance in SLT performance. Adding the visuospatial composite (VisuoScore) to the model significantly increased R2, F (2, 100) = 7.177, p = .009, accounting for an additional 4.4% of the variance (see Table 4).
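
A sketch of this two-step analysis follows (Python with pandas and statsmodels; the column names are illustrative, and the composites are formed as described above by standardizing and summing the construct-relevant scores).

    import pandas as pd
    import statsmodels.api as sm

    def standardize(series):
        return (series - series.mean()) / series.std()

    def hierarchical_regression(df):
        df = df.copy()
        df["MoveScore"] = sum(standardize(df[c]) for c in ["MoveSpan", "NSRT", "NSPT"])
        df["VisuoScore"] = sum(standardize(df[c]) for c in ["Corsi", "PatSpan"])

        step1 = sm.OLS(df["SLT"], sm.add_constant(df[["MoveScore"]])).fit()
        step2 = sm.OLS(df["SLT"], sm.add_constant(df[["MoveScore", "VisuoScore"]])).fit()

        f_change, p_change, _ = step2.compare_f_test(step1)  # test of the R-squared change
        return {"R2_step1": step1.rsquared,
                "R2_change": step2.rsquared - step1.rsquared,
                "F_change": f_change, "p_change": p_change}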

Table 4. Hierarchical regression analysis with SLT as the outcome variable

Note: β = standardized coefficient; pr = partial correlation; SLT = sign learning task; MoveScore = composite score formed by standardizing and summing movement-based scores (viz., movement span, nonsign repetition, and nonsign paired task); VisuoScore = composite score formed by standardizing and summing scores on visuospatial tasks (viz., pattern span and Corsi block tapping task).

Next, because we did not have specific predictions about which task or set of tasks would best predict sign learning, a forward stepwise regression analysis using the Akaike information criterion was conducted with SLT performance as the outcome variable. The best fitting model utilized the NSPT, PatSpan, and MoveSpan as predictors, F (3, 99) = 21.728, p < .001, and accounted for 39.7% of the variance in SLT performance. All predictors were significant (see Table 5).
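
Forward stepwise selection by AIC can be sketched generically as below (Python with statsmodels; this greedy procedure and the column names are illustrative rather than a reproduction of our exact analysis script).

    import numpy as np
    import statsmodels.api as sm

    def forward_stepwise_aic(df, outcome, candidates):
        selected, remaining = [], list(candidates)
        best_aic = sm.OLS(df[outcome], np.ones(len(df))).fit().aic  # intercept-only model
        improved = True
        while improved and remaining:
            improved = False
            # AIC of each model that adds one more candidate predictor
            aics = {p: sm.OLS(df[outcome], sm.add_constant(df[selected + [p]])).fit().aic
                    for p in remaining}
            best = min(aics, key=aics.get)
            if aics[best] < best_aic:
                best_aic = aics[best]
                selected.append(best)
                remaining.remove(best)
                improved = True
        return selected, best_aic

    # e.g., forward_stepwise_aic(data, "SLT", ["MoveSpan", "NSRT", "NSPT", "Corsi", "PatSpan"])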

Table 5. Forward stepwise regression analysis with SLT as the outcome variable

Note: β = standardized coefficient; pr = partial correlation; AIC = Akaike information criterion. NSPT = nonsign paired task; PatSpan = pattern span; MoveSpan = movement span.

DISCUSSION

Perceptual-motor and phonological processing in sign learning

The primary objective of this study was to identify predictors of sign learning. In order to do so, we worked under the theory that, in addition to general mnemonic and attentional processes, the observed relationship between phonological STM and L2 lexical learning is due to similarities in perceptual-motor, not phonological, processing. Based on this theory and the fact that sign languages are visuospatial-manual, we identified predictors that varied along a number of dimensions but that we believed could nonetheless be classified as movement STM and visuospatial STM—constructs we reasoned were relevant to sign learning. In line with our predictions, predictors could be categorized as measures of the aforementioned constructs, and all predictors positively correlated with the SLT, indicating that movement STM and visuospatial STM are related to sign learning.

What then is the role of phonological processing in L2 sign learning by hearing nonsigners? We interpret our results as suggesting that phonological processing played little if any role in the relationships observed. To review, bivariate and partial correlation analyses revealed that all tasks classified as movement STM shared a large proportion of variance; however, this group of predictors included two tasks that used pseudosigns and can nominally be classified as phonological STM measures (the NSRT and NSPT) along with one nonlinguistic measure of STM for movement (the MoveSpan). Visuospatial STM was assessed with tasks using stimuli that bore no resemblance to signs, and these tasks were also significantly related to all other variables.

There are, of course, a number of potential counterpoints. Here we address three. First, one can look at the results of our regression analyses and note that the movement STM task that elicited the most attention to the phonological features of sign languages, the NSPT, was the best predictor of sign learning performance, suggesting that the phonological component assessed by this task was critical. This may be the case; however, it is important to note that beyond assessing memory for signs, the NSPT task was the only predictor with a clear learning component: all participants watched an instructional video explaining the judgment criteria. Thus, the strong relationship between NSPT and SLT performance may be partially explained by a shared learning, or long-term memory, component. In support of this view, note that the other task that used pseudosigns, the NSRT, did not correlate as highly with sign learning as either of the other two movement STM tasks.

A second counterpoint is that a nonsigner performed the SLT and NSPT pseudosigns. Beginning sign learners are more variable in production (Hilger, Loucks, Quinto-Pozos, & Dye, 2015), produce larger signs than native signers (Mirus, Rathmann, & Meier, 2001), and take longer to sign (Cull, 2014). These differences may have disrupted the rhythmic-temporal patterns that characterize all languages (Petitto, 2005; Petitto et al., 2012, 2016) and that might otherwise have triggered phonological processing. Pseudosigns in the NSRT, however, were performed by a deaf native signer, and as discussed above, performance on this task shared a significant proportion of variance with the other two movement STM tasks and with sign learning. This suggests that in the present study, the effect of having a nonsigner perform pseudosigns was negligible or, more generally, that nonsigners process signs in a nonlinguistic fashion.

Finally, a third counterpoint is that we did not provide discriminant evidence: our case would be stronger had we shown that phonological STM assessed via an auditory-verbal measure was more weakly correlated with sign learning than the movement or visuospatial STM tasks. Recall, however, that at least one study has found that in hearing nonsigners, digit span did not correlate with ASL vocabulary growth (Williams et al., 2016a). Still, these counterpoints warrant further investigation.

If our conclusions are substantiated, then they raise questions about the nature of the phonological loop and its relationship to lexical learning. As currently conceptualized, the phonological loop is a STM system that is distinct from long-term memory and, because it is specialized for abstract phonological representations, is critical for lexical learning in any language, signed or spoken (Baddeley, 2012; Baddeley et al., 1998). Our results, in conjunction with prior research (e.g., Newman-Norlund et al., 2006; Williams et al., 2016a, 2016b), however, raise the possibility either that the phonological loop does not come online until after experience with a particular language modality or that it does not deal with phonological information per se but with perceptual-motor events (see Jones, Hughes, & Macken, 2006; Jones et al., 2007; Wilson, 2001).

The role of visuospatial STM in sign learning

A secondary objective was to identify which task or set of tasks would best predict sign learning. A priori, we hypothesized that visuospatial STM would account for variance in sign learning over and above movement STM; however, we made no predictions about specific tasks. To test our hypothesis, we created composite scores by standardizing and summing relevant predictors and then submitted these composites to a hierarchical regression analysis with the SLT as the outcome variable. Both composite scores were significant predictors, indicating that movement and visuospatial STM account for independent portions of variance in sign learning performance. Next, we performed a forward stepwise regression analysis to identify the best predictors. This analysis revealed that the NSPT, PatSpan, and MoveSpan accounted for the greatest unique proportions of variance in overall sign learning performance.

It is important to note that in formulating our hypothesis, we took into account two features that make signs, and movements in general, quite different from spoken words: rather than the sequential presentation of sounds, signs are composed of the simultaneous presentation of static and dynamic visuospatial features. Based on the nature of signs and research showing that it is difficult to process visual and motion information in parallel (Ding et al., 2015; Vicary et al., 2014; Vicary & Stevens, 2014), we reasoned that it would be difficult for one to encode the disparate features that distinguish one sign from another after only a single exposure but that multiple exposures would allow one to shift attention to features that are encoded by different STM subsystems. In line with this reasoning, we viewed movement STM tasks as faulty measures of their component processes and so hypothesized that including more direct assessments of visuospatial STM in a battery of tests would improve measurement accuracy.

This line of reasoning appears to have been supported: visuospatial STM did account for variance in sign learning over and above movement STM. We were surprised, however, by the magnitude of the relationship between visuospatial STM tasks, particularly the PatSpan, and sign learning, as we expected that movement STM tasks would generally be the strongest predictors.

We suspect that the relative equality between visuospatial and movement STM was due in part to a strategy that utilized memory for key configurations to aid in the recall of movements. To illustrate, consider the pseudosign depicted in Figure 1. A large amount of information can be gleaned by simply referring to the two static images; all that is left to know or guess is the intervening motion. In a similar fashion, memory for key configurations is likely used to redintegrate entire movement patterns. Note, this example implies that one can correctly recall a movement by encoding two configurations; however, it is likely that as the complexity of a pseudosign, and human body movements in general, increases, so too does the load on visuospatial memory (Vicary et al., 2014).

Whether the magnitude of this relationship generalizes should be investigated, as others have reported a dissociation between visuospatial and movement STM, evidenced, for example, by a lack of interference in dual-task paradigms (Smyth, Pearson, & Pendleton, 1988; Smyth & Pendleton, 1989, 1990) and nonsignificant correlations between movement and visuospatial STM tasks (Wu & Coulson, 2014). On the one hand, it may be the case that some idiosyncrasy in our sample, tasks, or methods resulted in the observed relationships. On the other hand, individuals asked to recall signs and other body movements might generally rely on visuospatial memory to aid recall. In support of the latter view, participants in our study were not instructed on whether they should rehearse any of the movements, and few spontaneously chose to do so. Moreover, research on the perception and production of signs by adult signers and nonsigners typically finds that the movement parameter is the most error prone, followed by handshape and orientation, and finally location (Bochner et al., 2011, 2016; Mann et al., 2010; Ortega & Morgan, 2015; Williams & Newman, 2016b). Handshape, orientation, and location are features that can be represented in static visual imagery.

Concerning the tasks identified as “best” predicting sign learning, the result of the forward stepwise regression analysis indicates that sign learning is best assessed by a battery of movement and visuospatial STM tasks. This further supports our view that movement STM tasks, which require visuospatial and motion processing, are faulty measures of their component processes. Consequently, researchers interested in sign learning may wish to include assessments of movement and visuospatial STM in their battery of tasks.

Conclusion and future directions

The results of this study suggest that sign learning is strongly dependent on movement and visuospatial STM and that both make unique contributions to its prediction. Although this study was not explicitly intended to test the role that phonological processing plays in the relationship between phonological STM and lexical learning, it does raise questions about its involvement. In this way, we have illustrated how investigating L2 sign language acquisition can inform theories about memory and language learning.

In order to substantiate the conclusions drawn here, future studies should continue to investigate the possibility that phonological processing is an important component of sign learning in beginning signers, for example, by including both spoken and signed measures of phonological STM as well as other language and perceptual-motor tasks. It is also important to investigate the ecological validity of our findings: are movement STM and visuospatial STM as important to learning real signs performed by native signers as they are to learning pseudosigns performed by nonsigners? Are they predictive of learning in a classroom as well as in a lab? Finally, we chose to focus exclusively on STM measures; however, there are certainly other important factors to investigate.

APPENDIX A MOVEMENT SPAN (MOVESPAN) SCORER DIRECTIONS

Scoring

Each movement will be scored individually and can be awarded 0, 0.5, or 1 point.

  • Full point: all parameters of the movement were reproduced

  • Half point: the movement differed by one parameter from the target OR the movement was reproduced correctly but not mirrored.

  • Zero: the movement differed by more than one parameter

Comments

All movements that are scored less than 1 must have comments. Use the following along with your own comments.

  • O = omission = item was not performed

  • I = intrusion = an item not part of the current set (or even task) was performed (use the other comments section)

  • S = substitution = an incorrect movement, handshape, location, or orientation was used in an item

  • A = addition = an extra movement was added to an item

  • D = deletion = a movement was deleted from an item

Parameters

Handshape

  • There are three handshapes in this task: spread hand (ASL 5), flat hand (ASL flat B), and fist (ASL A).

  • Do not deduct points for small deviations—for example, if the handshape was supposed to be a flat hand but there is a little spread, judge it on whether it looks more like a spread hand or more like a flat hand. Similarly, do not deduct for slight extensions of the thumb or pinky or any other digit.

Orientation

  • Any deviation of about 75 degrees or greater is considered incorrect.

Location

  • Movements done along or referencing a specific part of the body (not including fingers, see below) should be judged correct if they were in the general area (think of easily nameable areas). For example, if the right hand should be touching the knuckles of the open left hand and the participant is instead touching the middle portion of the back of the hand, that is okay. In contrast, touching the tips of the fingers or touching closer to the wrist is incorrect.

  • Movements pointing to or between specific fingers must be reproduced to those specific locations. If hand orientation is reversed, the location could be correct even if pointing to an incorrect finger. For example, if the model showed their palm and pointed to the ring finger but the subject showed the back of their hand and pointed to the middle finger, then, assuming all other aspects were correct, this item would be given half a point.

  • Movements done around the body should be judged using the NSRT criteria for location: use general zones such as near the head, near the body, in front of the head, in front of the body, and so on.

Path movement—movement beginning at the shoulder joint

  • Path movement will be considered incorrect if path movement was added, deleted, performed in the wrong direction, or used a completely different movement.

  • Regarding repetitions: do not count! Simply distinguish between “once” and “more than once,” meaning if a movement has repetitions but the subject only does the movement once, then this is incorrect; or if the movement has no repetition but the subject adds one, this is also incorrect.

  • Regarding length of path: only consider the length of the movement when it extends from within the person's frame to outside of it and vice versa.

  • Regarding trajectory: any deviation of ~30 degrees or more is considered incorrect.

Internal movement—movement of the wrist or fingers

  • The only internal movement in this task is wrist rotation; judge this parameter incorrect if the orientation of the hand is off by about 75 degrees or more.

  • This parameter should be considered wrong if any other internal movement is added.

APPENDIX B NONSIGN REPETITION TASK (NSRT) SCORING DIRECTIONS

Scoring

Nonsigns will be scored dichotomously (0 or 1) on the following: handshape, path movement, and internal movement. Location errors will be noted but will not be used in calculating scores.
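
As a hypothetical illustration of this rule (not part of the original scorer directions), an item-level score could be assembled from the three dichotomously scored parameters as follows; the function name and the decision to report a 0-3 item total are assumptions.

```python
# Hypothetical illustration of dichotomous NSRT item scoring: handshape,
# path movement, and internal movement each contribute 0 or 1; location
# errors are noted in comments but excluded from the item total.

def score_nonsign(handshape_ok: bool, path_ok: bool, internal_ok: bool,
                  location_ok: bool = True) -> dict:
    """Return per-parameter scores and an item total (0-3) for one nonsign."""
    scores = {
        "handshape": int(handshape_ok),
        "path_movement": int(path_ok),
        "internal_movement": int(internal_ok),
    }
    return {
        "scores": scores,
        "item_total": sum(scores.values()),   # location intentionally omitted
        "location_error_noted": not location_ok,
    }

# Example: correct handshape and path movement, incorrect internal movement,
# plus a location error that is noted but does not lower the item total (2).
print(score_nonsign(True, True, False, location_ok=False))
```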

Besides scoring, you will also provide comments for each column.

Handshape

  • An incorrect handshape may add or delete a finger/thumb or use a different handshape (e.g., 5 instead of B, or V instead of K).

  • A small deviation in handshape is allowed (e.g., if the pinky sticks out a bit while doing a B handshape).

  • Deviations in handshape will be considered incorrect when a finger/thumb is in a different position or configuration.

  • Orientation: any deviation of 75 degrees or greater will be considered an error in handshape.

Path movement

  • Path movement will be considered incorrect if path movement was added, deleted, performed in the wrong direction, or used a completely different movement.

  • Regarding repetitions: do not count! Simply distinguish between “once” and “more than once,” meaning if a movement has repetitions but the subject only does the movement once, then this is a 0; or if the movement has no repetition but the subject adds one, this is also a 0.

  • Regarding length of path: only consider the length of the movement when it extends from within the person's frame to outside of it and vice versa. (Remember [lead author's] example.)

  • Regarding trajectory: any deviation of ~30 degrees or more is considered incorrect.

  • Regarding the size of the movement when a shape is outlined (e.g., a circular path movement): distinguish only two sizes, large and small.

Internal movement

  • Internal movement will be considered incorrect . . .

    • If aperture or trill was added or deleted

    • If wrist rotation, deviation, or nodding [extension/flexion] was added or deleted

    • If a second handshape's orientation is off by 75 degrees or greater

    • If the second handshape is otherwise incorrect

Location

  • Location will be considered incorrect . . .

    • If a movement or handshape is incorrectly occluded or exposed

    • For unidirectional movements:

      • If the location of the gesture begins or ends in the wrong general area

        • Around the head/around the body, in front/to the side of the person, etc.

        • Again, judge distance grossly

    • For alternating movements or movements with repetition:

      • The movement should be contained within the same general space, but do NOT count off if the individual begins and ends the movement in the opposite place (e.g., if the movement alternated between right and left but the participant began the movement as left-right, that is fine).

      • Since we are not counting how many times a movement was performed, location may be off here as well; that is okay, so long as the movement required repetition and the participant performed more than one cycle.

Comments

  • Comments must be made anytime

    • A “0” is given for any parameter

    • A difficult decision was made

    • The wrong hand was used

      • If the wrong hand is used and this affects the movement or location, make sure to score and comment accordingly.

      • If the wrong hand was used but movement or location was unaffected, then simply note it here.

    • Use the following codes:

      • O = omission = item was not performed

      • I = intrusion = an item not part of the current set (or even task) was performed (use the other comments section)

      • S = substitution = an incorrect movement, handshape, location, or orientation was used

      • A = addition = an extra movement was added

      • D = deletion = a movement was deleted

ACKNOWLEDGMENTS

This research was supported by the National Science Foundation Science of Learning Center Grant on Visual Language and Visual Learning (VL2) at Gallaudet University, under cooperative agreement number SBE-0541953, and by a Goizueta Foundation Fellowship awarded to the first author. We thank Maya Berinhout, Rachel Monahan, Natalie Pittman, and Angelique Soulakos for their help throughout the project. We also thank Dr. Randy Engle and Dr. Daniel Spieler for helpful suggestions.

Footnotes

1. To maintain consistency with naming conventions in the literature, we refer to tasks using pseudoword and pseudosign stimuli as nonword and nonsign tasks, respectively.

2. Note that the SLT score is itself a composite, formed by summing across two different sets of pseudosign–word pairs.

Figure 1. Example of a pseudosign depicting the major phonological parameters of handshape, location, movement, and orientation. The sign begins with the right, dominant hand holding a "Y" handshape, oriented with the palm facing the body, and in contact with the chest. Next, the dominant hand arcs away from the body and toward the right while simultaneously rotating the hand so that the palm faces the ground. The pseudosign ends in front of the model, in neutral space.

Figure 2. An example from the nonsign paired task. After seeing the full video of the target and either item 1A in the first block or 1B in the second, the response screen appears: "Were the gestures you just saw the same or different? Click to make your choice." Pictures display the final position of a pseudosign.

Figure 3. An example of a practice Corsi trial, set size three.

Figure 4. An example of a practice PatSpan trial, set size three. The final frame depicts the response screen, instructing participants to "click on the green button when you are finished."

Figure 5. Depiction of the sign learning task. (a) A pseudosign–word pair from the study portion of a trial. (b) An item from the test portion: the pseudosign (cue) is presented followed by the response screen showing all words from this set, in alphabetical order.

Table 1. Descriptive statistics for all tasks

Table 2. Bivariate (bottom half) and partial (controlling for order; upper half) correlations

Table 3. Partial correlations controlling for visuospatial STM and order

Table 4. Hierarchical regression analysis with SLT as the outcome variable

Table 5. Forward stepwise regression analysis with SLT as the outcome variable