
13 - Code-switching between sign languages

from Part III - The structural implications of code-switching

Published online by Cambridge University Press:  05 June 2012

Edited by Barbara E. Bullock, Pennsylvania State University
Almeida Jacqueline Toribio, Pennsylvania State University


13 Code-switching between sign languages

David Quinto-Pozos

13.1 Introduction

Code-switching (hereafter CS) can occur when signers of two sign languages interact. This is not surprising, since CS is presumably a phenomenon that occurs regardless of the modality in which language is produced and perceived. Even so, the researcher of CS in signed languages is faced with challenges that may be unique to that modality. In particular, the question of how to attribute various signs or meaningful elements within an utterance (i.e. to Language A, Language B, both languages, or neither language) is among the main concerns. Admittedly, this phenomenon is not unique to signed language CS research, as evidenced by the discussion of congruence in the spoken language literature and its role in CS (see Sebba, this volume). However, the potential for similarities between sign languages perhaps makes this issue much more pronounced in CS between sign languages.

Some signed languages are related historically, and this can be noted, in some cases, by examining lexical and grammatical similarities between the languages. However, regardless of the history of any combination of sign languages, there seem to be similar ways that signers use their bodies – not simply their hands – to create meaning across such languages, and this results in the production – within a signed stream – of elements whose meanings are relatively transparent to an interlocutor. Essentially, such characteristically transparent communicative devices exist across sign languages, and in some cases they take on linguistic roles. For example, an extended index finger directed at the signer herself, often at the chest but possibly at the face, usually acts as the first person singular pronoun, and points in the signing space often indicate locative references such as here or there. So-called “classifier” constructions and bodily actions that appear mimetic in nature are also found in the utterances of signers of different sign languages, and those constructions and actions are often difficult to attribute to one particular sign language as opposed to another. If one also considers the articulation, within the sign stream, of common gestures that are used throughout various cultures (e.g. the thumbs-up gesture to indicate that something is good), the degree to which meaning creation is transparent across sign languages – even those that are unrelated – is significant. CS researchers who work on signed language data must carefully consider a broad spectrum of meaningful devices that signers produce because they influence the ways in which CS analyses are performed. Since users of spoken languages can also accompany their speech with points and gestures, it would appear that such productions could also present a challenge for researchers of CS in speech – not only for sign linguists. In essence, the signed modality forces us to consider ways in which linguistic and gestural devices interact, and this could be extremely valuable to CS analyses of spoken or signed languages.

In addition to communicative devices that are somewhat similar across sign languages, it appears that various linguistic structures of signed languages are more similar to each other than is the case for spoken languages, and this holds true even when one considers unrelated sign languages. One could suggest that this is the case at the level of phonology, morphology, and even syntax. As a result, whether or not a particular form can be described as a code-switch could be questionable. This situation may be akin to types of CS that occur between two historically related spoken languages like Spanish and Portuguese, but it is perhaps very different for examples of CS between structurally diverse languages.

There is at least one other major challenge that is faced by the researcher of signed language CS, and it also relates to the primary question of how to determine what language is operating at any one time during the articulation of various elements within the sign stream. This challenge stems from the fact that the articulators that a signer uses – body parts that allow sign languages to express meaning in certain ways – differ from those used in spoken languages. In essence, a signer can use more than one body part (e.g. hands, arms, head, torso) simultaneously to create meaning, and this fact influences how CS is examined in signed language research.

13.2 Notions to consider: differences between sign and speech

Despite similarities in various facets of linguistic structure between signed and spoken languages (e.g. the existence of phonological primitives, various word-formation processes, and syntactic structures), there are some noteworthy differences between languages across the two modalities. Several of these likely stem from having the hands, arms, and other upper body parts as articulators as well as from the use of the immediate area in front of the signer as an important space in which signs are articulated.

The simultaneous nature of signed language has been recognized since the beginning of linguistic research on American Sign Language (ASL) and other sign languages (Fischer 1974; Klima and Bellugi 1979). For example, morphemes that communicate person, number, and aspectual information can occur concomitantly with some verbs, and a signer's two hands can be used to simultaneously articulate two different classifier constructions, referred to hereafter as polycomponential signs, following Schembri (2003) and Slobin et al. (2003). A signer can also produce non-manual signals (e.g. mouth and head movements, torso shifts, and patterns of eyegaze) simultaneously with a lexical sign in order to modify that sign (or phrase). As an example, a non-manual signal such as an adverbial mouth movement can co-occur with verb signs. The mouth and lips can also serve to articulate, without voice, a spoken language word while the signer produces a semantically equivalent sign, and this is commonly referred to as "mouthing." Even if one considers tonal contrasts in some of the world's languages and prosodic features that provide meaningful information, spoken languages do not tend toward exhibiting simultaneity to the degree that signed languages do.
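To make the analytic problem concrete, here is a minimal sketch of how such parallel channels might be represented for analysis. It assumes a simple, hypothetical tier-based annotation structure; the tier names, class names, and the DRIVE fragment are illustrative, not drawn from any particular transcription tool or from the chapter's data.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Annotation:
    """One meaningful element on a single articulation channel (tier)."""
    tier: str      # e.g. "right_hand", "mouth", "eyegaze"
    value: str     # e.g. a sign gloss or a non-manual label
    start: float   # onset in seconds
    end: float     # offset in seconds

@dataclass
class Utterance:
    annotations: List[Annotation] = field(default_factory=list)

    def cooccurring(self, anno: Annotation) -> List[Annotation]:
        """Annotations on other tiers that overlap `anno` in time --
        exactly the simultaneity that a purely sequential CS analysis would miss."""
        return [a for a in self.annotations
                if a.tier != anno.tier and a.start < anno.end and anno.start < a.end]

# Hypothetical fragment: a verb sign with a co-occurring adverbial mouth gesture.
utt = Utterance()
verb = Annotation("right_hand", "DRIVE", 0.0, 0.6)
utt.annotations += [verb, Annotation("mouth", "mm (adverbial)", 0.05, 0.6)]
print([a.value for a in utt.cooccurring(verb)])   # ['mm (adverbial)']
```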

One reason for differences between signed and spoken languages may lie in the purported speed of sign production versus spoken word production. Klima and Bellugi (1979) claim that, on average, a spoken word can be uttered in half the time required to articulate a sign.1 Meier (2002) hypothesizes that the rate of signing versus speaking plays a prominent role in the simultaneous nature of signed languages because it discourages sequential affixation.2 Essentially, an "articulatory constraint may push natural sign languages, such as ASL, in a particular typological direction, that is, toward nonconcatenative morphology" (Meier 2002:8).

Another factor that may play an important role in leading to the simultaneous nature of signed languages has been described in terms of the amount of information that can be communicated simultaneously in one modality versus the other. Meier (2002:10) suggests that, "at any instant in time more information is available to the eye than the ear, although in both modalities only a fraction of that information is linguistically relevant." Emmorey (2002) makes a claim consistent with Meier's argument: it is easier to visually perceive spatially disparate information in parallel than to perceive and decipher different types of auditory information simultaneously. In other words, it is easier to perceive complex visual displays at once than auditory signals that may contain disparate types of information.

The use of three-dimensional space in the articulation of sign seems to also lead to some interesting differences between sign and speech. Sign languages allow for the simultaneous communication of various types of information about one or more objects. As mentioned earlier, a signer can articulate a polycomponential sign with one hand and a different one with the other hand, and the two hands interact in specific ways (Supalla 1986).3 Such articulations, referred to here as entity polycomponential signs, can provide information about motion and/or location of the objects, including to what type or class each item belongs. The kinds of productions that have been labeled polycomponential signs are also used to describe how objects are handled (handle polycomponential signs) as well as how objects can be described in visual–geometric ways (size and shape specifiers) (Emmorey 2002; Schembri 2003).

In addition to the use of 3-D space by signers when they articulate polycomponential signs, users of all sign languages also have access to the gestural medium for meaning generation. As a result, signers can alternate linguistic signs with non-linguistic gestures, and the signs and gestural material can also co-occur in some cases – such as with deictic pointing and verbs that indicate person and number, referred to commonly as "agreement" verbs or "indicating" verbs. The gestures themselves are sometimes culturally specific emblems that are also produced by members of the hearing community (McNeill 1992). However, the gestures can also be deictic and pantomimic in nature. The latter are particularly intriguing because they tend to be used regularly in sign languages. Signers across different sign languages produce similar mimetic gestures that alternate with linguistic material, which are referred to by some researchers as constructed action. Constructed action has been described for sign as the way in which a signer uses her body to depict aspects of an animate entity (Metzger 1995; Aarons and Morgan 2003). For example, a signer might "act" like another person or an animal when describing something about that being or something that occurred. Clark and Gerrig (1990) describe a similar phenomenon as an accompaniment to spoken language use, and they develop an argument for why demonstrations, as they call these mimetic actions, function as quotations. The alternation of gestural material such as emblems and constructed action with linguistic/grammatical material might be rule-governed, although such systematic relationships have been addressed only minimally in the literature (e.g. Aarons and Morgan 2003). The fact that signers have access to gestural resources within the same channel of communication poses a challenge for the researcher who is analyzing CS data, as will be demonstrated later in this chapter.

With regard to linguistic structure, some authors have suggested that sign language phonologies are more similar to each other than spoken language phonologies when compared cross-linguistically. Lucas and Valli (1992) note that signs referencing names of foreign countries have become incorporated into ASL from other signed languages, but the phonologies of the source languages are so similar to the phonology of ASL that it is difficult to determine if the incorporation should be considered a lexical borrowing or an example of CS. Borrowings in spoken language have often been characterized by the phonological integration of the borrowed word into the phonology of the other language, but this integration may not be so evident in signed language. For instance, the sign ITALY as signed in Italian Sign Language (LIS) has now been incorporated into ASL, in some cases replacing the older sign for Italy. Lucas and Valli suggest that LIS and ASL have similar phonological inventories of handshape, palm orientation, and location, and this is true even though they are not related or mutually intelligible as languages. They also note that the languages may have similar segmental structure. One part of the authors' rationale for claiming that sign languages have similar phonologies lies in the assertion that such languages have many more basic components (i.e. basic handshapes, movements, places of articulation, etc.) than the sets of inventories that spoken languages contain. They cite as evidence the suggestion of a colleague (Robert Johnson) that:

. . . pure minimal pairs of the kind used to demonstrate contrast in spoken languages are hard to find in ASL and that this may be so because there are so many more basic components from which to build contrastive units – so many handshapes, locations, palm orientations, and facial expressions – as opposed to the relatively limited number of components available in spoken languages.

(Lucas and Valli 1992:30–31)

The paradox is that while there may be more basic components in signed language, various signed languages in their present forms seem to share a significant percentage of those large sets. Visual iconicity in signed languages – such as the use of deictic forms, polycomponential signs, and constructed action – perhaps contributes to this situation.

In terms of grammatical items, there are cross-linguistic differences in some aspects of sign language morphology and syntax, such as word order (Newport and Supalla 2000), the existence of auxiliary verbs that use locations in the signing space to indicate subjects and objects (Rathmann 2000; Quadros 1999), and even the grammar of negation (Pfau 2002). By some accounts, syntax is the level of structure at which sign languages may most closely resemble spoken languages. Yet sign languages also seem to share similar morpho-syntactic structures with one another. They all appear to have different categories of verbs, e.g. verbs that indicate the subject and object of the verb by movement through space and verbs that rely on word order for the assignment of case. Other examples of cross-linguistic similarities concern aspectual modifications to verbs, the use of pronouns, and the use of polycomponential signs.

Additionally, sign languages appear to possess a base level of lexical similarity that is greater than that found for spoken languages, a fact that is likely influenced by a significant degree of iconicity in sign languages. Iconicity is a complex phenomenon, but for the purposes of this chapter it can be defined as the ways in which a signer creates visual correspondences between her own body (hands, arms, torso, head, etc.) and the referent. This modest degree of lexical similarity holds even for sign languages with no known historical or genetic relationship. The potential for visual iconicity in the signed modality influences signed languages in this regard. However, there are many lexical items in sign languages that are not considered iconic and others that have become less iconic over time. A higher degree of iconicity can make it difficult for the CS analyst to determine if a particular sign – especially if it is very iconic – is really a sign of one language and not the other or just a visually meaningful way of representing a concept that may not be a lexical item in either language. While common methods for determining lexical similarity across sign languages are useful, they are also somewhat arbitrary and may not reflect the ways in which signers recognize and process signs. For instance, similarity has been determined by comparing articulations across the parameters of sign formation: a similarly articulated sign is one that shares at least two of the three values of the major phonological parameters (handshape, movement, and place of articulation) (Guerra Currie et al. 2002). As will be noted later, this method, while useful for various analyses, may allow for important information to be overlooked.
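The two-of-three criterion just described can be stated compactly. The sketch below is only an illustration of that criterion as summarized above; the parameter values given for the two hypothetical signs are invented for the example, not taken from any published comparison.

```python
MAJOR_PARAMETERS = ("handshape", "movement", "location")

def similarly_articulated(sign_a: dict, sign_b: dict, threshold: int = 2) -> bool:
    """Count two signs as 'similarly articulated' when they share at least
    `threshold` of the three major phonological parameters (the criterion
    described above). Each sign is a dict mapping parameter -> value."""
    shared = sum(sign_a.get(p) == sign_b.get(p) for p in MAJOR_PARAMETERS)
    return shared >= threshold

# Invented parameter values for two hypothetical signs:
sign_from_language_a = {"handshape": "5", "movement": "up-down", "location": "neutral space"}
sign_from_language_b = {"handshape": "5", "movement": "up-down", "location": "chest",
                        "orientation": "palm-in"}  # finer detail the criterion never consults
print(similarly_articulated(sign_from_language_a, sign_from_language_b))  # True (2 of 3 shared)
```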

In summary, when one considers the various ways in which sign and spoken languages differ from each other, it becomes clear that analyses of CS in signed language are challenging. There are times when it is not clear how to differentiate the languages used in a particular utterance. However, there are also instances when signs that are unique to one or the other language can be identified, and sometimes a switch occurs at a location where such signs occur in sequence. This is, perhaps, the best place to begin a discussion of CS in sign. This chapter provides some illustrations of what seem to be clear cases of CS between two sign languages, although the presentation will also include various examples of issues in the labeling of meaningful elements.

13.3 Code-switching in sign

13.3.1 Code-switching between sign and speech

Most of the work on CS in signed language focuses on the interaction between a signed and a spoken language. Some researchers have looked at the manner in which the interlocutor's language background and language use influence the form that CS takes as it is performed by Deaf adults (Hoffmeister and Moores 1987; Kuntze 2000; Lee 1983), while others have focused on the language of Deaf children (Kachman 1991).

A common theme of the sign–speech work on CS involves the various ways in which a signer can produce elements from the spoken and the signed language simultaneously. As noted earlier, the use of multiple articulators (the hands, face, etc.) at once is common in signed languages. For instance, Davis (1989, 1990) refers to the simultaneous mouthing of English words during the production of ASL signs – in his data from English–ASL interpreters working from voice to sign – as code-mixing. As noted earlier, the challenge for analyzing this type of language contact phenomenon is that two meaningful elements can co-occur, so determining the source (e.g. English or ASL) of the elements in sequence is problematic. An example of a signed language interpreter producing code-mixing, as adapted from Davis (1989:93), is found in (1). Following conventions for the transcription of ASL, signs are represented by English words in capital letters, dashes that separate the letters within a word represent fingerspelling, and non-manual signals are indicated immediately above the English glosses of the ASL signs with which they co-occur.

(1)

mouthing: "most households"        mouth: mm
MOST U-S HOME . . .                IN-GENERAL

"Most households in the United States . . ."

In (1), the interpreter signs MOST U-S HOME while mouthing the English words "most households." This simultaneous phenomenon is what Davis refers to as code-mixing. Then, the interpreter signs IN-GENERAL while producing an ASL mouth movement ("mm"), also considered a non-manual signal, that is a common non-manual modifier of various signs. Note that there is also a switch in this segment from the English mouthing to the ASL non-manual signal. It seems apparent that a CS analysis of ASL and English needs to take into account the simultaneous code-mixing of the two languages along with sequential CS.

Lucas and Valli (1992) note that CS following spoken language criteria would mean that the language user would need to change completely from one type of language production (e.g. signing) to "switch" to the other type of production (e.g. speaking). That type of CS is mostly not the focus of the works mentioned previously, but this phenomenon has been reported to occur, albeit minimally, in the language use of people who are fluent in both languages. Petitto et al. (2001) and Emmorey et al. (2005) suggest that this type of CS is relatively rare – comprising approximately 5–6% of switches in their corpora. Petitto et al. reported this result based on the development of three hearing children – all less than five years old at the commencement of one year of data collection – acquiring Quebec Sign Language (Langue des Signes Québécoise, LSQ) and French simultaneously, whereas the Emmorey et al. study focused on the language use of eleven ASL–English bilingual adults who acquired both languages natively. Petitto et al. report that one child performed the sequential switch found in (2):

(2)

Ça ressemble MOUCHOIR
this resembles [facial] tissue

"This looks like facial tissue."

One example of ASL–English consecutive CS as reported by Emmorey et al. is the following:

For example, after saying “pipe,” participant 2 then produced an ASL classifier construction indicating a vertically-oriented thin cylinder without any accompanying speech.

Interestingly, both studies reported very similar percentages of sequential CS in two different spoken–signed language pairs and in both adults and children.

As expected, since these hearing bilinguals produced sequential CS approximately 5% of the time, the majority of the language mixing can be categorized as code-blends – the simultaneous production of a spoken word with a semantically equivalent sign. Code-blending has been described as being different from Simultaneous-Communication (Emmorey et al. 2005). The difference between code-blends and code-mixes (as defined here) is that the former involve the use of speech – along with sign – while the latter involve the voiceless mouthing of words.
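The distinctions drawn so far – sequential CS, code-blends, and code-mixes – can be summarized as a rough decision procedure. The sketch below is only an illustration of those definitions, not an analysis tool; the function name and arguments are invented for the example.

```python
def classify_contact(sign_gloss, spoken_word, voiced, simultaneous):
    """Rough decision procedure for the contact categories distinguished above.

    sign_gloss / spoken_word: the signed and spoken-language elements (None if absent);
    voiced: whether the spoken-language word is actually voiced (vs. silently mouthed);
    simultaneous: whether the two elements overlap in time.
    """
    if sign_gloss and spoken_word:
        if simultaneous:
            return "code-blend" if voiced else "code-mix (voiceless mouthing)"
        return "sequential code-switch"
    return "monolingual stretch"

# Davis's example (1): ASL signs with voiceless English mouthing, overlapping in time.
print(classify_contact("MOST U-S HOME", "most households",
                       voiced=False, simultaneous=True))
# -> 'code-mix (voiceless mouthing)'
```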

Contact between ASL and English has also been described in terms of CS that occurs for some Deaf users of ASL and Cued Speech – a way to make spoken language visible through the use of manual "cues" articulated by a hand of the cue-er. Consonant and vowel sounds are represented by the hand in this system and, in theory, any spoken language can be "cued." Hauser (2000) describes the signing of a ten-year-old girl who is fluent in both ASL and cued English and how she code-switches between the two forms of manual communication. An example from Hauser (2000:65) is found in (3); the Cued English is represented in lower-case letters and ASL in capital letters.

(3)

. . . brothers are WAKE-UP so woke up so TIRED so I said . . .

In this example, the person is switching sequentially between a manual form of English, which represents the sounds of the language, and ASL.

13.3.2 CS between signed languages

Thus far, all the examples of CS that have been described concern the mixing of a signed language and some form of a spoken language. It seems that little work has been done on the mixing of two signed languages, and examples of sign–sign CS are mostly lacking in the literature. One work that does provide some examples of such phenomena is Quinto-Pozos (2002), and it focuses on contact between ASL and Mexican Sign Language (LSM) along two areas of the Mexico–US border in Texas. There are other areas of the world where one would expect contact between two signed languages, although it appears that no published works exist that document such contact. One such area might be along the border of two provinces of Canada, Quebec and Ontario, where different signed languages, Quebec Sign Language (LSQ) and American Sign Language (ASL), are used. Contact between signed languages may also occur in parts of Spain, where Spanish Sign Language (LSE) and Catalan Sign Language (LSC) are used by populations of Deaf signers.

For the study of LSM and ASL contact, Quinto-Pozos (2002) videotaped interactions between users of Mexican Sign Language (LSM) and American Sign Language (ASL) who live on the United States side of the US–Mexico border. Both LSM and ASL have been reported to be historically related to the Old French Sign Language (OLSF) of the 1800s (Guerra Currie 1999; Adams 2003), although the two languages are distinct and not mutually intelligible (Faurot et al. 1999). Yet, there do exist lexical and grammatical similarities between the two languages.

The CS data reported in Quinto-Pozos (2002) come from deaf signers who were fluent bilinguals in the two languages and others who were mostly proficient in one of the two languages. The data collection involved group discussions (four participants per group in each of two locations) and one-on-one interviews, and those sessions were examined for various contact phenomena in the signed modality.

13.3.2.1 Reiterative CS

One type of CS described in Quinto-Pozos (2002) is the switching of synonymous signs. In these cases, each code-switched element was produced after the participant had articulated a semantically equivalent sign from the other language that differed in form. Of the 40 switches of this type from 64 minutes of conversation, more than half (n = 23) were nouns, one-fifth (n = 8) were verbs, one-eighth (n = 5) were adjectives, and there were also a couple of possessive pronouns and adverbs. A seemingly similar type of CS in spoken language contact situations has been termed reiteration (Auer 1995; Eldridge 1996; Pakir 1989; Tay 1989). This is the phenomenon of a message in one code being repeated in another code. Various social functions have been attributed to the phenomenon of reiterative CS, among them: negotiation of a collective social identity, accommodation, amplification of a message, emphasis, reinforcement or clarification of a message, and attention-getting, as in the regulation of turn-taking (Pakir 1989; Tay 1989; Auer 1995; Eldridge 1996). In the LSM–ASL data that were analyzed by Quinto-Pozos (2002), the CS seems to have served several of the social functions just mentioned, such as emphasis, clarification, accommodation, and reinforcement. However, there are also cases where the functions of switching are not clear.
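The distribution just reported can be checked with a few lines of arithmetic. The counts are those given in the text; the remaining four items (the possessive pronouns and adverbs, whose exact split is not reported) are inferred from the stated total of 40.

```python
# Reiterative switches by word class (40 switches in 64 minutes of conversation).
counts = {
    "nouns": 23,                           # "more than half"
    "verbs": 8,                            # "one-fifth"
    "adjectives": 5,                       # "one-eighth"
    "possessive pronouns and adverbs": 4,  # inferred remainder; split not reported
}
total = sum(counts.values())
assert total == 40
for word_class, n in counts.items():
    print(f"{word_class}: {n}/{total} = {n / total:.1%}")
# nouns: 57.5%   verbs: 20.0%   adjectives: 12.5%   pronouns/adverbs: 10.0%
```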

In many cases, CS (not only reiterative switching) can serve what Appel and Muysken (1987:119) call a directive function – the desire to "include a person more by using her or his language." This directive function that Appel and Muysken describe is not unlike the concept of accommodation that Pakir (1989) described as a function of reiterative CS. The first example from Quinto-Pozos (2002) can be seen in (4), where the code-switched sign is bolded. In these examples, LSM signs are represented by Spanish words in capital letters and those from ASL are indicated via English. Points are indicated by their form (e.g. "point to finger") or by their function in the case of pronouns (e.g. ME/YO).

(4)

point: middle finger (for listing)  TOMATO  TOMATE  ADD-INGREDIENTS  MIX  gesture: "thumbs-up"

"(. . . and then you take) tomatoes and you add them to the other ingredients and mix everything together. It's great."

Example (4) contains a few items that do not allow for easy classification as elements from LSM, ASL, both languages, or neither language. One example is the point that begins the segment; it seems to be a common listing strategy that is not attributable to only one of the sign languages. Also noteworthy is that the ASL signs indicated as ADD-INGREDIENTS and MIX are quite transparent (or iconic), although they have been labeled as ASL simply because they were not confirmed by the author to also be LSM signs. It is likely that both of those purported ASL signs would be understood by signers of both languages. Example (4) does contain CS, however, and that is the focus of the following discussion.

In (4), the bilingual interviewer was mostly looking in the direction of two users of LSM who were raised in Mexico as they recapped cooking instructions that were presented earlier by another participant. The interlocutors who engaged the signer frequently produced LSM signs in other segments of the discussion, which is why the signer may have made a conscious decision to add the LSM nominal sign TOMATE after the ASL sign TOMATO. There was a very brief pause between the sign TOMATO and TOMATE, which gives the code-switched item a certain degree of emphasis. In some respects, the code-switched sign could also be viewed as a clarification – a sign used to clarify an ASL sign that might not be entirely familiar to at least one of the other participants. Also, note that the final meaningful element in (4) is the emblematic gesture "thumbs-up," which provides a positive comment about what had just been described. Whether or not such an element should be considered an LSM or ASL sign – having become lexicalized into either or both of those languages – is another question that should be addressed, and this would also apply to other emblems that are used within the sign stream.

In another example of CS from the group discussions, the interviewer code-switched a verb while asking a question about what one of the participants regularly does for her birthday. The sequence of signs that contains that verb appears to be a serial verb construction, a type of syntactic construction that is common in ASL and perhaps other sign languages (see Supalla 1990). The example can be found in (5), and the code-switched item is in bold. Like (4), the example given in (5) shows the use of deictic points to a second-person singular interlocutor. Such points would be produced in either language, although they are also common outside of the two languages within the gestural communication of hearing people. They have been labeled in (5) as pronouns from both languages.

(5)

TÚ/YOU  CUMPLEAÑOS  TÚ/YOU  HACER  FORM-GROUP  INVITE
2sg     birthday    2sg     do
INVITAR  SELF      TÚ/YOU  INVITAR
invite   yourself  2sg     invite

"For your birthday, do you usually invite people to get together? Do you do that (yourself)?"

In (5), there are several clear switches from unique signs in one language to unique signs in the other. For instance, HACER ("do") to FORM-GROUP is the first clear switch, and INVITAR to SELF is the second. It very well could be the case that the second-person singular switches were influenced by only one of the sign languages, although the surface forms do not allow for such a determination.

In (5) the signer did not pause, even briefly, before the code-switched item. Thus, this example does not exhibit the emphasis that characterized the CS in (4). Yet, this example might still function as accommodation or even identification with the other signer. The interlocutor who held the signer’s gaze during this sequence was one of the participants who produced the most LSM in the border data collection sessions. Further, the interview session with that interlocutor was characterized by relatively large amounts of LSM production. As in that interview session, the interviewer, during this example from the group discussion, may have presumed that this particular interlocutor preferred LSM and thus made an effort to produce LSM signs. This type of CS can also be described as serving a reinforcing function, which is one of the roles that reiterative CS has been claimed to perform.

In example (5) it is not clear what function the code-switched element served. This is also true of other examples of reiterative CS that occurred in the group discussions and interviews as reported in Quinto-Pozos (2002). During a discussion of whether or not participants' families are Deaf or hearing and how the participants communicate with their families, a Deaf female participant who was raised in Mexico commented on the fact that most of her family are hearing. The example is given in (6), and the code-switched item, the LSM noun FAMILIA, is in bold. Similarly articulated signs (those that differ by the value of one phonological parameter at most but that are similar in meaning) in LSM and ASL are represented in capital letters in Spanish and English separated by a slash.

(6)

NO/NO  ME/YO  NO/NO++  ME/YO  gesture: shake-finger  DEAF/SORDO
__ head shake for negation __
gesture: "wave hand to negate"  ME/YO  FAMILY  FAMILIA  MY/MI  gesture/emblem: "well"

"As for me, my family is not Deaf. Oh well."

In (6), there was no pause between FAMILY and FAMILIA. The sign FAMILIA was not stressed and no other means were used to draw attention to this sign. This does not seem to be a clear case of emphasis. Further, while the female participant signed FAMILIA she was looking at another participant who signed mostly ASL during the group discussion and interviews. Thus, this code-switch does not seem to be a case of accommodation either. Perhaps this instance of CS was intended to display an identification with the interlocutor, but there are no explicit features (emphasis of the sign, a pause, change of eyegaze, etc.) that would suggest what the signer's intent was when she produced this code-switch. As can be seen, the reasons for using CS are not always clear. Sometimes there are no explicit features that would suggest that the code-switch was deliberately produced for a specific reason or reasons. Thus, lists of CS functions do not seem to account for all instances of CS.

Another point that is noteworthy about (6), in consideration of the challenge for determining how to attribute various elements in the sign stream, is that the only signs that are unique to the two languages are the signs FAMILY and FAMILIA – the actual location of the CS. So, while it is possible to note that the signer code-switched here, it is not possible to determine how to label the other meaningful elements (points, similarly articulated signs, and common gestures) within the sequence. This is a problem for CS analyses that rely on clearly identifying the source language for each lexical item.

13.3.2.2 Non-reiterative code-switching

The corpus of US–Mexico border data collected by Quinto-Pozos also contains examples of CS that are not of the reiterative variety. These are presented here to further illustrate why it is often difficult to clearly attribute a meaningful element from the sign stream for purposes of CS analyses. In the examples presented in this section, the code-switched item does not follow a semantically-equivalent sign from the other language, as was done in examples (4) through (6). In some cases the switches are of single signs, but the switch might also contain a sequence of signs and/or other meaningful productions (e.g. polycomponential signs, gestural productions, and/or constructed action).

In the first example of non-reiterative CS, the participant describes how candy from Mexico is quite different from that of the United States. The signer is left handed, and that is her default dominant hand for signed language production. But, as will be noted in (7), she switches hand dominance for a short sequence of signs for comparison purposes, which is a common strategy for providing comparisons in ASL and perhaps in other sign languages as well. In (7) through (9), the bolded item represents a clear switch from a previous sign or sequence in the other language. Any sign unique to either LSM or ASL that immediately follows a similarly articulated sign is not represented in bold in (7) because it is not clear if the similarly articulated sign should be labeled an LSM or ASL production using the current methods of sign analysis. Other transcription conventions pertinent to this example include: [lh] or [rh] to indicate an articulation with the signer's left or right hand, "+" to indicate a single repetition of the sign, "PS2" (or Polycomponential Sign 2) to indicate a possible item from the set of so-called handle classifier forms, and "CA:" with a brief description of the signer's enactment to indicate the use of constructed action.

(7)

BUT  [lh]: point: upward and leftward  FOOD/COMIDA  DULCE
           there                                     sweet/candy
DIFFERENT/DIFERENTE  HOT+  PICANTE-CHILE  DELICIOSO
                            spicy           delicious
[rh]: point: downward  CA: signer tasting candy / PS2: holding a small item  LOUSY
      here
BUT  [lh]: CHOCOLATE [ASL]  DELICIOUS  point: upward and leftward  CHOCOLATE [ASL]  LITE  HIGH  DULCE        RIGHT  point-TV3  HIGH
                                                                                                 sweet/candy

"However, the candies in Mexico are different; they are spicy and delicious. Here, they are lousy. But, the chocolate here is also sweet. There the chocolate is lighter and not so full of sugar as it is here."

As can be noted, (7) contains two examples of LSM–ASL similarly articulated signs (FOOD/COMIDA and DIFFERENT/DIFERENTE), and it also contains several examples of pointing, but there are also several unique signs from each of the languages. LSM signs include DULCE, PICANTE-CHILE, and DELICIOSO. ASL signs include BUT, HOT, LOUSY, CHOCOLATE, LITE, RIGHT, and HIGH. The passage begins with an ASL sign as a conjunction, and ASL signs outnumber LSM signs. One could claim that the passage seems to contain more ASL than LSM (lexically and in terms of grammatical function words), although the similarly articulated signs, points, and brief use of constructed action present challenges for CS analyses.

The next two examples include instances of polycomponential signs. The sequence in (8) is about the preparation of a food dish in Mexico that does not need to be cooked before serving; [bh] indicates the signer’s articulation with both hands; PS3 is a size and shape specifier.

(8)

THAT (response to the interviewer)
[bh]: PS3: meat exiting a grinder  [bh]: PS3: motion of gears that are grinding something
TOMATO  ONION4  PICANTE/CHILE  MIX
                spicy
[rh]: PS3: shape of bowl, [lh]: PS2: stirring as if with spoon
NOT NEED FUEGO/FIRE  NOT NEED FUEGO/FIRE  COLD/FRÍO  EAT/COMER  DELICIOSO
                                                                 delicious

"That's it. It's the tomato, onion, chile that is mixed together and you stir it up in a bowl. You don't need to cook it; you just eat it cold. It is delicious."

Note that the sequence in (8) begins with an ASL sign (THAT), which is an affirmation of what the interviewer had just signed. The signer goes on to describe a grinding action with polycomponential signs and follows that with two ASL nouns and an LSM noun. It is not clear if the polycomponential signs should be analyzed as LSM structures, ASL structures, or both. Three other elements surface that are difficult to label as either ASL or LSM: the highly iconic sign MIX, a polycomponential sign depicting the side of a large bowl, and the signer showing the mixing of something in the bowl. The final sign of the entire sequence is the LSM sign DELICIOSO, an adjective that describes the food that is prepared in that manner. In some respects, this sequence appears to have more of an ASL character because of the several ASL noun signs (TOMATO, ONION) and the negation and modal signs (NOT NEED) that were signed twice. However, it should be noted that even though FUEGO/FIRE is coded as a similarly articulated sign, there are nonetheless some differences between FUEGO and FIRE, specifically in hand-internal movement, path movement, and whether or not the fingers are fully extended or bent. See Table 13.1 for a comparison of the two signs. Even though FUEGO and FIRE seem to differ in several ways, an analysis that considers only the three major parameters of sign formation for the determination of similarly articulated signs might lose the distinctions between the signs (either because the two handshapes would be considered variants of a 5-handshape or because the up-and-down path movements would be considered similar, even though one occurs with a circular movement and one does not). However, if one were to consider more fine-grained phonetic analyses of the two signs, the results would likely suggest that the sequence with the purported similarly articulated sign should probably be shown as in (9):

(9)

NOT NEED FUEGO NOT NEED FUEGO

Then, the code-switching analysis could focus on the switch between a verb (NEED) and its object (FUEGO).

Table 13.1 Differences between LSM FUEGO and ASL FIRE
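The entries of Table 13.1 are not reproduced here, but the point it supports can be sketched along the lines of the comparison criterion given earlier: under the coarse two-of-three criterion FUEGO and FIRE count as similarly articulated, while finer phonetic features distinguish them. The feature values below are illustrative stand-ins based only on the differences mentioned in the text (hand-internal movement, path movement, finger extension); which sign carries which value is assumed for the sake of the example, not taken from the table.

```python
MAJOR_PARAMETERS = ("handshape", "movement", "location")

# Illustrative stand-in values, not the actual entries of Table 13.1.
fuego = {"handshape": "5 (variant)", "movement": "up-down", "location": "neutral space",
         "hand_internal_movement": "wiggle (assumed)",
         "path_movement": "with circular component (assumed)",
         "finger_extension": "bent (assumed)"}
fire  = {"handshape": "5 (variant)", "movement": "up-down", "location": "neutral space",
         "hand_internal_movement": "none (assumed)",
         "path_movement": "without circular component (assumed)",
         "finger_extension": "fully extended (assumed)"}

coarse_shared = sum(fuego[p] == fire[p] for p in MAJOR_PARAMETERS)
fine_differences = [k for k in fuego if k not in MAJOR_PARAMETERS and fuego[k] != fire[k]]

print(coarse_shared)     # 3 -- the coarse criterion treats the two signs as the same form
print(fine_differences)  # ['hand_internal_movement', 'path_movement', 'finger_extension']
```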

The final example, given in (10), also includes polycomponential signs. This sequence describes a participant explaining to the interviewer and the others in the group that it is easier to understand a written Spanish recipe than a written English recipe. In this example, the item denoted as “PS1” should be considered within the entity category of polycomponential signs.

(10)

EASY (10+)

BUT [lh]: ME/YO MÉXICO THINK MÉXICO FÁCIL

[rh]: point-downward TOUGH INGLÉS PS1:flat object CA:signer looks at paper

[lh]: UNDERSTAND EXPLAIN LONG/LARGO PS1: paper CA: signer looks at paper

WRONG TIME TWO/DOS WRONG THREE TIME BIEN/GOOD

“That’s easy [referring to reading Spanish]. I think it’s easy in Mexico. Here [United States] English is tough. In order to understand something written in English, it takes a long time. Sometimes I get something wrong two times, then the third time is fine.”

In (10), the signer begins with ASL signs, interjects LSM signs for the country Mexico, but then also uses a mixture of LSM and ASL signs in the next few signs. The adjective FÁCIL ("easy") is in a clause (assumedly beginning with the conjunction BUT and ending with the adjective) that has only two signs that are clearly from LSM: MÉXICO and FÁCIL. As mentioned earlier, though, the country signs used in foreign sign languages (e.g. JAPAN in Japanese Sign Language, MÉXICO in LSM, etc.) are more common in ASL than they used to be; perhaps they could currently be considered borrowings. This complicates matters because now it is not clear if the clause is mostly LSM or ASL, and there are no function word signs in this sequence to provide information about which grammar is being utilized at various points in the sign stream. Further, it is not clear how to label the source language of the polycomponential sign (representing a paper or other written document) and the constructed action of the signer gazing at the paper.

Based on the data presentation and the brief discussion of examples (7), (8), and (9), it is clear that some examples of CS are quite challenging to analyze because of the issues raised earlier. Note that there is little discussion of non-manual signals (mouth gestures, eyebrow movements, torso shift, etc.) in these passages, which would present yet another example of simultaneous articulations that would need to be examined. Additionally, possible switches could be lost because of the current system of classification for similarly articulated versus non-similarly articulated signs. Also, the segments that contain polycomponential signs and constructed action are particularly difficult to attribute to one language or the other. All of these issues create challenges for CS analyses in sign.

13.4 Conclusion

As was suggested throughout the data presentation section, CS analysts of sign data are faced with the challenge of determining how to label some of the meaningful elements from a signed conversation. And such labeling should occur before the data can be examined within any particular model or theory. Current frameworks for CS analyses are primarily based on sequential analyses of meaningful elements (e.g. words, bound morphemes) without taking into account the alternations with gestural material or the influences from visual iconicity that occur in the signed modality. There are also challenges in sign analyses that have to do with simultaneity in that modality. All of these issues complicate analyses of sign data.

Yet, in some cases, sequential CS can be identified and analyzed for sign. This is true for the LSM–ASL reiterative switches presented in examples (4), (5), and (6), and these examples do not seem to contain simultaneous articulations that would leave the researcher wondering how to label each of the code-switched signs. They also contain sign pairs – the reiterative switch and the sign that precedes it – that are clearly articulated differently in the two languages. Even though ASL and LSM are related historically and have similar phonologies, the lexical items in those examples differ from each other, which allows the analyst to determine when the signer is producing one language versus the other. However, those examples also contain the use of gesture (both in the form of widely used emblems and also in the form of constructed action) that alternates with signs, and some of the signs are highly iconic. This can be problematic for language labeling.

Signed languages have some structures that pose challenges for the CS researcher. In particular, the simultaneous nature of sign (e.g. polycomponential signs, code-mixes, and code-blends), the apparent similarity of some sign language structures, and the interaction of signs with non-linguistic gestures need to be considered carefully. One way to address those challenges is to produce more fine-grained descriptions of the phonetic, phonological, morphological, and syntactic structures of signed languages. Knowing, in specific ways, how sign languages differ from each other will allow for the examination of possible examples of CS between such languages. Further, lexical comparisons between sign languages need to concern themselves with more than the major parameters of sign formation; specific details of orientation, finger positions, contact locations, and the like are also necessary. More cross-linguistic work in sign could perhaps help to understand how polycomponential signs differ from each other (if they do), and offer suggestions about how researchers can identify differences between such signs in different sign languages. Research on how specific languages constrain or govern the use of constructed action as it interacts with the linguistic system is sorely needed. Work of this nature can inform CS analyses in sign, and will allow for the inclusion of sign data in theories and models of CS. Assumedly, theories of CS should be equally applicable to sign and speech data, but that remains to be confirmed with more empirical data.

Finally, it seems that all CS researchers should be faced with the challenge of accounting for multi-modal data. Two possible questions for such a line of inquiry could be: how does the use of gesture and demonstration influence the way people code-switch in their communication? And, how does the use of spoken words and phrases interact with emblematic gestures in spoken language conversations? Signed language data can provide exciting opportunities to consider how non-verbal ways of communicating interact with linguistic systems.

Notes

1. Despite the purported differences in speed of sign versus spoken word production, a proposition is articulated, on average, in ASL within the same time frame that a similar proposition is uttered in English (Klima and Bellugi 1979).

2. The simultaneous character of natural signed languages has also been advanced as evidence for the purported ineffectiveness of invented sign systems, which happen to primarily employ sequentially affixed morphemes, to aid in the acquisition of English for Deaf children (Supalla and McKee 2002).

3. These constructions are also known by various other terms such as classifiers, classifier predicates, and verbs of location and motion.

4. The ASL lexical item ONION was articulated by the participant without the normal wrist-twist of that sign. Rather, contact was made between the index finger and the temple area of the signer’s head. This articulation might reflect an LSM accent in ASL, although one would need to investigate what part of the phonology of LSM influenced the signer to fail to provide the common wrist-twist of the ASL sign.

