
Iconicity and semantic transparency in Hong Kong Sign Language: evidence from ratings and three guessing paradigms

Published online by Cambridge University Press:  01 April 2026

Arthur Lewis Thompson
Affiliation:
The University of Hong Kong, Hong Kong
Wing Cheung Aaron Chik
Affiliation:
The University of Hong Kong, Hong Kong
Yu On Mavies Ngai
Affiliation:
The University of Hong Kong, Hong Kong
Pui Ching Rachel Chen
Affiliation:
The University of Hong Kong, Hong Kong
Chui Yin Judy Ng
Affiliation:
The University of Hong Kong, Hong Kong
Youngah Do*
Affiliation:
The University of Hong Kong, Hong Kong
Corresponding author: Youngah Do; Email: youngah@hku.hk

Abstract

This study elicits iconicity ratings for Hong Kong Sign Language (HKSL) from L1 HKSL Deaf signers and L1 Cantonese hearing non-signers, as well as non-signer guessing accuracy, and compares these norms with other sign languages. Iconicity ratings were collected for 972 HKSL signs from Deaf signers and hearing non-signers and correlated with guesses made by hearing non-signers in three guessing paradigms, that is, three-alternative forced choice (3AFC) translation selection, 3AFC video selection and an open-ended (open cloze) response task. HKSL signs were rated for iconicity comparably to American Sign Language (ASL) and Israeli Sign Language (ISL), with Deaf signers rating signs with higher iconicity overall. We also correlated HKSL iconicity ratings across signs with synonymous translations from languages with available ratings, ASL (634 signs), ISL (158 signs) and British Sign Language (99 signs). Guessing accuracy was found to correlate with higher HKSL iconicity ratings. As for semantic transparency, 3AFC guessing results indicate that many signs are in fact ‘translucent’, whereby inference based on the context provided by answer choices allows hearing non-signers to select the target answer with high accuracy. Our open-ended guessing task yielded considerably lower accuracy; however, accurate responses (2,183 of 15,228) were found to correlate with higher iconicity ratings.

Information

Type: Article
Creative Commons License: CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2026. Published by Cambridge University Press

1. Introduction

Iconicity is a perceived resemblance between form and meaning. In sign languages, this is when a sign, or a part of a sign, looks like what the sign means or some facet of what it means. Perceiving a sign as iconic is possible through a cognitive process known as structure mapping (Emmorey, 2014; Taub, 2001), involving three steps: (1) image selection (i.e., What mental imagery best corresponds to a referent?), (2) schematization (i.e., How can that mental imagery be shaped to better correspond with units of language?) and (3) encoding (i.e., Which aspects of the mental imagery map onto which units of language?). Take HOUSE for example: the image of a square structure with a pitched roof is first selected, and that image is then pared down to just a pitched roof, which is encoded using two flat palms leaning toward each other and making contact at the fingertips. The extent to which a linguistic form is a good fit for its intended referent is subjective. Ratings have been collected for spoken and signed languages to gauge where in the lexicon and to what extent iconicity is found per lexical item (see Fuks, 2023 for review; Caselli & Pyers, 2017; Hinojosa et al., 2021; Occhino et al., 2017; Perry et al., 2015; R. L. Thompson et al., 2012; A. L. Thompson et al., 2020; Vinson et al., 2008; Winter et al., 2017). Such studies essentially ask participants to rate how much a lexical item sounds (or, in the case of sign languages, looks) like what it means. Iconicity ratings for sign languages have been shown to correlate with other norms, such as age of acquisition (R. L. Thompson et al., 2012; Vinson et al., 2008) and semantic transparency (Sehyr & Emmorey, 2019). McLean et al. (2023) suggest that combining guessing tasks with iconicity rating tasks provides a more comprehensive understanding of how different indirect factors influence iconicity judgments. By collecting ratings from Deaf signers and hearing non-signers and conducting guessing tasks, this paper sets a baseline for investigating iconicity and semantic transparency of signs from Hong Kong Sign Language (HKSL).

1.1. Hong Kong Sign Language

HKSL is a relatively young language (c. 1950s), endemic to Hong Kong, and has yet to receive official recognition or undergo standardization at the government level. HKSL is a direct descendant of Shanghai Sign Language (Woodward, 1993), a southern variety of Chinese Sign Language (Yang, 2015), combined with a local substrate of homesign (Sze et al., 2013). HKSL today is mutually unintelligible with the sign language officially propagated in Mainland China. Many Deaf schools closed due to changes in Hong Kong education policies in the 1970s, flattening the once diverse pedagogical landscape to oral- and integration-based education (Sze et al., 2013). HKSL is now decidedly endangered and lacks a widely recognized standard. Signing communities are dispersed, creating multiple diverse dialects across the city. HKSL exhibits noticeable signing heterogeneity, which our consultants were accustomed to and recognized as a product of the language’s history, the absence of signing in educational settings, a lack of standardization, and age-related and community-level factors (e.g., which Deaf school a signer attended). Some signs used in our study may therefore not be familiar to all Deaf Hong Kong participants. Additionally, high rates of withdrawal from education among members of Hong Kong’s Deaf community (Census and Statistics Department, HKSAR, 2021) present challenges for controlling experimental variables due to literacy issues and education-related barriers.

1.2. Comparing iconicity between sign languages

Another goal of this paper is to further gauge iconicity’s pervasiveness, a quality assumed to be due to iconicity’s productive power in sign language (see Hodge & Ferrara, 2022). We aim to do this by describing the distribution of iconicity throughout HKSL and comparing that distribution with those of other sign languages reported to date, that is, American Sign Language (ASL), British Sign Language (BSL) and Israeli Sign Language (ISL). Iconicity is not absolute but gradient; structure mapping strategies vary, for example, in how transparent or obvious their intended referent is (Emmorey, 2014; Klima & Bellugi, 1979; Taub, 2001; see Fuks, 2023 for review). Overall, studies are in line with Pizzuto and Volterra (2000), who first showed that Deaf signers intuit both iconic and non-iconic signs better than hearing participants, regardless of the hearing participants’ native language.

A high degree of correlation has been found between Deaf signer iconicity ratings and hearing non-signer ratings (Fuks, 2023; Sehyr & Emmorey, 2019; Trettenbrein et al., 2021). Results are somewhat mixed in that, for ISL and German Sign Language (DGS), signers tended to rate signs as more iconic overall than non-signers (Fuks, 2023; Trettenbrein et al., 2021), while for ASL the opposite trend was found (Sehyr & Emmorey, 2019). However, for ISL and ASL, non-signers rated verbs higher than nouns, whereas signers exhibited no such difference. Moreover, ISL and ASL non-signers’ iconicity ratings differ according to the strategies used to implement structure mapping: acting signs versus representing signs. Acting signs are those in which the body is used to convey an action (Müller, 2014, 2016), for example, assuming the posture of a jogger to mean RUN, or holding one’s hand out and moving it as if to turn a key in a lock to mean KEY. Representing signs are those in which part of the body is taken to visually stand for a given entity through perceived physical similarities (Müller, 2014, 2016), for example, both hands touching at the fingertips in an upward slant to represent the pitch created by the central beam of a roof to mean HOUSE. Ortega et al. (2019, 2020) explain that non-signers can recognize body movements in signs that match their motor schemas; they exploit the visual, semantic, perceptual and sensorimotor representations that signers likewise use to create gestural representations. Acting signs, which are created from these representations, are more easily understood. This explains why non-signers rated acting signs as more iconic than representing signs for ASL (Sehyr & Emmorey, 2019) and ISL (Fuks, 2023) and why correct open-ended guesses correlate with high iconicity ratings (Fuks, 2023; Sehyr & Emmorey, 2019; Trettenbrein et al., 2021). The bias toward acting signs is also discussed in van Nispen et al. (2017). As both signers and non-signers need to express the same concept under the same constraints of the manual-visual modality, the two groups are skewed toward producing overlapping (identically shaped) manual structures, creating ‘manual cognates’ (Ortega et al., 2019, 2020) that can transcend language and cultural boundaries if they originate from language- and culture-independent features, although sharing the same cultural context does help non-signers comprehend ‘cognates’ that arise from cultural references, for example, Italian co-speech gestures (Pizzuto & Volterra, 2000).
However, for representing signs, in which the hand(s) serve as symbol(s), a clear difference between signers and non-signers can be observed due to the difference in their semiotic repertoires (Fuks, 2023; Ortega et al., 2019) – the ‘lexicon’ of signs, symbols and gestures of an individual shaped by their linguistic and cognitive experiences. Signers, having richer and more flexible use of their semiotic repertoires, can understand and produce representing signs better than non-signers (Fuks, 2023, p. 55; Sehyr & Emmorey, 2019). Non-signers, whose semiotic repertoires are smaller, are not only more likely to interpret semiotic representations with movement-related meanings (Fuks, 2023); they also struggle to guess the meaning of these signs, performing poorly in open-ended guessing tasks (Fuks, 2023, p. 55; Sehyr & Emmorey, 2019).

1.3. Predictions

Based on previous findings from other sign languages, we expect Deaf signers to assign higher iconicity ratings to HKSL signs in general compared to hearing non-signers. We also predict that the iconicity ratings for HKSL signs will align closely with those of other sign languages, such as ASL and ISL (Fuks, 2023). Additionally, we anticipate a correlation between higher iconicity ratings and guessing accuracy, highlighting the significance of iconicity in sign recognition. The series of rating and guessing experiments in the current study is poised to enhance our understanding of iconicity and semantic transparency in sign language comprehension, potentially offering insights that extend to other sign languages.

2. Methodology

The current study adopts an array of experimental paradigms to establish iconicity-related norms for HKSL signs. Each paradigm is designed differently and follows previous investigations of signed and spoken languages (Fuks, 2023; Ortega et al., 2020; Sehyr & Emmorey, 2019; Trettenbrein et al., 2021; see McLean et al., 2023 for a review of spoken language): iconicity ratings (Experiment 1), three-alternative forced choice (3AFC) guessing tasks (Experiments 2 and 3) and an open cloze (open-ended response) guessing task (Experiment 4). The transparency of a task (i.e., how much information is provided) dictates how, which and to what extent participants apply their own world knowledge to that task (see Emmorey, 2014). The more guesswork required, the less transparent the task. In our rating task, participants need only attempt structure mapping (between the translation and the sign) and report its fitness. Guessing tasks are therefore much less transparent. We implemented two types of guessing tasks, which we describe in turn. The first is the three-alternative forced choice (3AFC) task, where a stimulus is presented with three answer choices. The second is the open cloze task, where participants must offer their own response by typing a maximum of four Chinese characters. The open cloze task is the less transparent of the two (and decidedly more bottom-up) because the target HKSL sign is presented without any answer choices or translations. Participants must rely solely on their own world knowledge (and ingenuity) to carry out structure mapping and generate a translation independently.

By subjecting HKSL signs to such a range of experimental paradigms, we hope our norms will allow a more nuanced picture of iconicity and its relationship to semantic transparency to emerge, one directly comparable with other sign languages, enabling future explorations into potential convergences of form-meaning mappings at a more granular level (cf. Occhino et al., 2017; Zwitserlood et al., 2023). Indeed, we find that our iconicity ratings for HKSL correlate with those reported for ASL (Sehyr & Emmorey, 2019), BSL (Vinson et al., 2008) and ISL (Fuks, 2023).

2.1. HKSL database

As part of the stimuli design for all the experiments in this study, we compiled the HKSL Database, which contains the 972 signs used in Experiments 1–4. The database has two parts: Part One consists of randomly chosen signs based on the English glosses of ASL signs from ASL-LEX 2.0 (Sehyr et al., 2021), which were subsequently translated into Hong Kong Cantonese, then into HKSL. The ASL-LEX database was chosen as a reference for the compilation of our HKSL database primarily due to the availability of iconicity ratings and video data. After removing signs with no equivalent in HKSL (e.g., the proper noun GALLAUDET), 451 ASL-LEX items were used to create Part One of our HKSL Database. Part Two consists of 521 signs selected from beginner HKSL teaching materials published by the Professional Sign Language Training Centre and the Hong Kong Sign Language Association (labeled levels one to three, provided by our HKSL consultants). For each entry in the database, a native (CODA) hearing HKSL signer, who works as an HKSL interpreter, was filmed executing each sign in front of a plain background, complete with naturally produced mouthing. As there is no standard form of HKSL, we asked the signer to use the variant they believed would be most widely understood. Each sign is coded for its handshapes using a list derived from the database of the Centre for Sign Linguistics and Deaf Studies (CSLDS) of the Department of Linguistics and Modern Languages at The Chinese University of Hong Kong (CUHK) (2018).

The HKSL signs were also encoded for lexical class to align with the experimental paradigm in Sehyr and Emmorey (2019). As there is currently no rigorous and widely accepted distinction of lexical categories in HKSL, nor were such distinctions made during recording, the lexical category of the Cantonese gloss was chosen as the closest approximation. The lexical categories of the glosses were extracted from a contemporary descriptive Cantonese dictionary (words.hk, n.d.), which is encoded in Chinese. Certain adjustments were made to make the lexical class data comparable with other sign language data, namely removing closed lexical classes and other labels (e.g., 語素 ‘morpheme’, 語綴 ‘affix’, 介詞 ‘preposition’) or converting them into prototypical lexical classes (e.g., 量詞 ‘measure words’ are reclassed as nouns), and encoding lexical classes for glosses not found in that dictionary. Post-adjustment, each sign is encoded for one or more of the following lexical classes: 名詞 ‘noun’, 動詞 ‘verb’, 形容詞 ‘adjective’, 副詞 ‘adverb’, 代詞 ‘pronoun’, 語句 ‘phrase’, 方位詞 ‘locality’ and 連詞 ‘conjunction’.
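The label adjustments just described amount to a small drop-or-remap table applied to each gloss's dictionary labels. The following Python sketch illustrates the idea only; the drop and remap sets shown are a partial, illustrative subset, not the study's full adjustment table.

```python
# Illustrative sketch of the lexical-class adjustment step.
# DROP and RECLASS are partial examples taken from the text, not the
# complete mapping actually used in the study.

DROP = {"語素", "語綴", "介詞"}      # morpheme/affix/preposition labels: removed
RECLASS = {"量詞": "名詞"}           # measure words are reclassed as nouns

def adjust_classes(labels):
    """Return the adjusted set of lexical-class labels for one gloss."""
    adjusted = set()
    for label in labels:
        if label in DROP:
            continue                  # closed classes and non-class labels dropped
        adjusted.add(RECLASS.get(label, label))
    return adjusted

print(adjust_classes({"量詞", "名詞", "介詞"}))  # {'名詞'}
```

A gloss labeled only with dropped categories would end up with an empty set, which in practice would be filled in manually, as the text notes for glosses missing from the dictionary.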

2.2. Iconic strategies coded according to ARTO

Iconic strategies (or ‘modes of representation’) describe how a sign resembles its referent. Sign languages are known to adopt different iconic strategies to execute signs whose translations are synonymous in spoken languages (Padden et al., 2013), and iconic strategies have been discussed with differing methods and scopes of categorization (Hwang et al., 2017; Müller, 2014; Ortega et al., 2019). We follow descriptions of commonly identified iconic strategies in the literature to code each HKSL sign, that is, Acting, Representing, Tracing and Orthographic (ARTO), based purely on the phonological parameters of the hands. We call this the ‘ARTO schema’, and all HKSL signs were coded according to it. The following examples used to illustrate the ARTO schema are all from HKSL.

Acting signs, or ‘handling’ or ‘pantomime’ signs (Padden et al., 2013, 2015), involve the hand(s) enacting a human motion, for example, moving up and down as in JOGGING or holding as in KEY (Müller, 2014; Ortega et al., 2019; Padden et al., 2013, 2015). Representing signs involve the hands standing in for another object, for example, a wall in OUTSIDE or petals in FLOWER (Hwang et al., 2017; Müller, 2014; Ortega et al., 2019; Padden et al., 2013, 2015) (see Figure 1). Tracing signs, also known as ‘drawing’ signs, involve the hand(s) drawing the outline or trajectory of an object (Hwang et al., 2017; Müller, 2014; Padden et al., 2013, 2015), for example, the shape of a crescent in MOON or the width of a trunk in TREE. Orthographic signs involve the hands forming a configuration which resembles that of a Chinese character, for example, PERSON 人, FAIRNESS 公 (from 公平 ‘fair’), MIDDLE 中. The orthographic strategy has not been discussed as its own strategy in previous studies. We do so here to account for the semantic nature of the Chinese writing system, in which each character generally corresponds to a single morpheme and thus a unit of meaning.
Where representing signs relate to their referent through structure mapping or analogy, orthographic signs go through an additional step: the sign relates first to the written character, which is then associated with the referent of that character. Nonetheless, the existence of orthographic signs means signers have another iconic strategy at their disposal. Finally, ‘None’ was used to label signs where none of the four categories apply, i.e., where the hand(s) depict neither themselves nor any visual features of the referent, as in PAIN. Note that, although the facial expressions in some signs may contain a certain degree of iconicity, the ARTO schema focuses solely on the hands and does not consider facial expressions, including mouthing (Figure 2 and Table 1).

Figure 1. Example of objects represented in Representing signs.

Note: Red highlights the handshape element of interest. From left to right: OUTSIDE 外 (Representing: a wall [the non-dominant hand]); FLOWER 花 (Representing: flower petals).

Figure 2. Examples of iconic strategies in signs according to ARTO schema.

Note: Red highlights the handshape element of interest. From top left to bottom right: KEY 鎖匙 (Acting: unlocking with a held key); OUTSIDE 外 (Representing: a wall [the non-dominant hand]); PIPE 管 (Tracing: outline of a pipe); PERSON 人 (Orthography: the Chinese character 人 ‘person’ from the signer’s perspective); PAIN 痛 (No iconic strategy present in handshape).

Table 1. Iconicity strategies used to code all signs in the HKSL database following the ARTO schema

Unlike previous studies, which describe one sign with one iconic strategy, we further distinguish the iconic strategies used in each constituent sign of compound signs (i.e., signs composed of multiple signs in sequence) and in each hand for asymmetrical signs (Footnote 1). The ARTO schema provides a notation for multiple strategies found in compound signs and asymmetrical signs: where a sign is asymmetrical, the iconic strategies are denoted with an upper- and lowercase code pair, where the uppercase code denotes the dominant hand. The ARTO schema coding of each constituent sign in a compound sign is represented as a comma-separated sequence (e.g., ‘O,A’), growing as long as the sequence of signs in the compound. Figure 3 demonstrates these notations.

Figure 3. ARTO schema coding of a compound sign and an asymmetrical sign.

Note: Left: UNIVERSITY 大學 is a compound sign of two signs; firstly the Chinese character 大 ‘big’ (Orthographic) and secondly the sign for READ/STUDY 讀 (Acting) articulated (‘mouthed’) here with open and unrounded lips corresponding to the unrounded vowel in the second syllable of the Cantonese word 大學 /tai˨ hɔk˨/ ‘university’ (whereas the same sign articulated with pursed lips is used for the monomorphemic form of READ/STUDY corresponding to the rounded vowel of the Cantonese word 書 /sy˥/ ‘book’). Right: WARN 警吿 is an asymmetrical sign; the dominant hand (red) depicts the action of warning (uppercase, Acting) and the non-dominant hand (blue) depicts a placeholder person (lowercase, Representing). These two signs are coded ‘O,A’ and ‘Ar’, respectively, as denoted in the top left corner.
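The ARTO notation is regular enough to decode mechanically: commas separate compound constituents, and a trailing lowercase letter codes the non-dominant hand. The following Python sketch is purely illustrative (not part of the study's materials) and decodes codes such as ‘O,A’ and ‘Ar’:

```python
# Hypothetical decoder for ARTO schema codes as described in the text.
# 'O,A' -> compound: first constituent Orthographic, second Acting
# 'Ar'  -> asymmetrical: dominant hand A(cting), non-dominant r(epresenting)

STRATEGIES = {"A": "Acting", "R": "Representing",
              "T": "Tracing", "O": "Orthographic", "N": "None"}

def decode_arto(code):
    """Return one dict per constituent, giving each hand's strategy."""
    constituents = []
    for part in code.split(","):
        dominant = STRATEGIES[part[0].upper()]
        # A second, lowercase letter codes the non-dominant hand.
        nondominant = STRATEGIES[part[1].upper()] if len(part) > 1 else None
        constituents.append({"dominant": dominant, "non_dominant": nondominant})
    return constituents

print(decode_arto("O,A"))  # UNIVERSITY: two constituents
print(decode_arto("Ar"))   # WARN: one asymmetrical sign
```

For symmetrical, non-compound signs the code is a single uppercase letter and the non-dominant slot is simply left unset.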

As a description of iconic strategies, the ARTO schema does not reliably distinguish mono- from multisyllabic signs. A syllable consists of one movement, two simultaneous movements, or simply an internal movement (Sandler, 2008), or static handshape(s) with no movement (Mak & Tang, 2011). Thus, both monosyllabic and multisyllabic signs are coded with one (or one pair of) ARTO schema code(s). ARTO schema coding was carried out by the two Deaf HKSL consultants (57-year-old male, 39-year-old female), who are native signers of HKSL, and by one hearing native speaker of Hong Kong Cantonese (26-year-old male) and one hearing native speaker of English (33-year-old male), both of whom are knowledgeable about HKSL. We filtered out signs whose codings differed between the two Deaf signers. After filtering, 664 signs in the set could be unambiguously identified as A (n = 130), R (n = 201), T (n = 41), O (n = 15) or N (n = 277). All materials, analyses and results can be found in the OSF repository: https://doi.org/10.17605/OSF.IO/5VXZF.

3. Experiment 1: Comparing iconicity ratings between L1 HKSL Deaf signers and L1 Cantonese hearing non-signers

The goal of Experiment 1 is to compare iconicity ratings of HKSL signs between L1 HKSL Deaf signers and L1 Cantonese hearing non-signers to investigate potential differences in perception based on linguistic background.

3.1. Participants

A total of 193 hearing non-signers, all self-reported native speakers of Hong Kong Cantonese, were recruited online at the authors’ institute (mean age = 29.27, SD = 8.25, age range 18–54 years; 130 female, 1 non-binary, 5 undisclosed).

3.2. Stimuli and procedure

All 972 HKSL signs from the database were rated for iconicity by the Deaf signers and the hearing non-signers. The signs were presented in video format. The experiment was conducted online as surveys using Qualtrics (Qualtrics, n.d.). The signs were evenly divided into nine surveys of 108 signs for hearing participants, but into six surveys of 162 signs for Deaf participants due to the discrepancy in group sizes. Signs were presented in a randomized order for every participant.

Experiment 1 was adapted from Sehyr and Emmorey (2019). Before the experiment, hearing participants were presented with consent information and experiment instructions in Standard Written Chinese (see https://osf.io/5qftd). Experiment instructions were also explained to Deaf participants in HKSL in addition to the written instructions. As part of the explanation, our consultants also introduced the Deaf participants to the concept of iconicity. In the experiment, the signs to be rated for iconicity were presented as video clips, one after another. A rating scale was presented below each video with instructions telling participants to rate each sign for iconicity on a seven-point scale, with 0 being ‘not iconic at all’ and 6 being ‘very iconic’. The layout is shown in Figure 4. All participants were reimbursed for their time.

Figure 4. Experiment 1 layout (hearing participants to the left; Deaf participants to the right).

Note: The line of translation presented to the hearing participants can be found beneath the video. For better Deaf accessibility, the text size was enlarged, and complex phrases were segmented with spaces. Translations: ‘In the video, the sign means […]. Do you think the sign resembles its meaning?’

3.3. Experiment 1 Results

3.3.1. Iconicity rating: Deaf signers versus non-signers

Iconicity ratings by L1 Cantonese hearing non-signers were distributed differently from those of L1 HKSL Deaf signers. Non-signers’ ratings skewed toward the lower end of the iconicity scale, while Deaf signers’ ratings tended toward the higher end (Figure 5). The mean iconicity rating was 3.77 for Deaf signers and 3.10 for non-signers, a significant difference, F(1,1260) = 83.18, p < .001, partial η² = .062. This result is the opposite of that in Sehyr and Emmorey (2019, p. 217), who reported higher iconicity ratings from non-signers than from Deaf signers. A correlation analysis revealed that by-item iconicity ratings from Deaf signers were moderately yet significantly correlated with non-signers’ ratings (r = .558, p < .001).

Figure 5. Iconicity rating distribution for Experiment 1.
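The two analyses reported above reduce to a two-group one-way ANOVA on ratings and a by-item Pearson correlation. The Python sketch below shows both computations on invented toy data; it is not the study's analysis script (the actual data and scripts are in the OSF repository).

```python
# Toy sketch of the two analyses in Section 3.3.1. All numbers below
# are invented for illustration.

def anova_two_groups(g1, g2):
    """One-way ANOVA F statistic for two groups (df = 1, n1 + n2 - 2)."""
    n1, n2 = len(g1), len(g2)
    m1, m2 = sum(g1) / n1, sum(g2) / n2
    grand = (sum(g1) + sum(g2)) / (n1 + n2)
    ss_between = n1 * (m1 - grand) ** 2 + n2 * (m2 - grand) ** 2
    ss_within = (sum((x - m1) ** 2 for x in g1)
                 + sum((x - m2) ** 2 for x in g2))
    return ss_between / (ss_within / (n1 + n2 - 2))

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length rating vectors."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

deaf = [4.1, 3.9, 3.5, 4.4, 3.0]      # invented per-item mean ratings
hearing = [3.2, 3.0, 2.8, 3.9, 2.6]
print(round(anova_two_groups(deaf, hearing), 2))
print(round(pearson_r(deaf, hearing), 2))
```

In practice one would use a statistics package that also reports p-values and effect sizes; the point here is only the shape of the computation behind the F and r values quoted above.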

3.3.2. Lexical class

There were 445 nouns and 189 verbs in our database of 972 signs. Unlike signs coded with several lexical classes simultaneously, these signs are coded exclusively as nouns or verbs. A 2 (lexical class: noun/verb) × 2 (sign language knowledge: HKSL Deaf signers/L1 Cantonese non-signers) ANOVA was employed to compare the perceived iconicity of nouns and verbs. A significant main effect of lexical class was found overall, with verbs rated significantly higher (Z score = .15) than nouns (−.06), F(1,1260) = 13.18, p < .001, partial η² = .010. This finding is consistent with Sehyr and Emmorey (2019, p. 217). Examining within-group data, we found that verbs (−.09) were rated significantly higher than nouns (−.36) by non-signers, F(1,630) = 12.1, p < .001, very similar to Sehyr and Emmorey (2019, p. 217). However, ratings for verbs (.39) and nouns (.24) were not significantly different among Deaf signers, F(1,630) = 2.98, p = .085, though in the same direction as in Sehyr and Emmorey (2019). In other words, the higher iconicity ratings for verbs than nouns stem primarily from the judgments of non-signers.

Despite the differences between the two groups of participants, there was a moderate positive relationship between their ratings. For both nouns and verbs, signs rated as more iconic by L1 Cantonese non-signers were more likely to be rated as more iconic by HKSL Deaf signers (nouns: r = .526, p < .001; verbs: r = .529, p < .001).

3.3.3. ARTO schema coding

Two one-way ANOVAs revealed significant differences in iconicity ratings among signs with different ARTO schema codings, for both HKSL Deaf signers (F(4,659) = 78.9, p < .001, η² = .324) and L1 Cantonese non-signers (F(4,659) = 55.5, p < .001, η² = .252). A 5 (ARTO schema coding: A/R/T/O/N) × 2 (sign language knowledge: HKSL Deaf signers/L1 Cantonese non-signers) ANOVA also revealed a significant interaction between sign language knowledge and ARTO schema coding, F(4,1318) = 7.10, p < .001, η² = .015.

Among iconicity ratings by Deaf signers, Bonferroni-corrected post hoc tests showed that, the iconicity ratings of ‘acting’ signs (M = 4.74, SD = 1.10) were significantly higher than that of signs not belonging to any of the four categories (M = 2.83, SD = 1.25), p < .001, d = 1.623, that of ‘tracing’ signs (M = 4.06, SD = 1.51), p = .013, d = .580, and that of ‘representing’ signs (M = 4.35, SD = 1.05), p = .029, d = .337. Signs belonging to none of the four categories were rated with lower iconicity than ‘representing’ signs, p < .001, d = −1.286, ‘tracing’ signs, p < .001, d = −1.043, and ‘orthography’ signs (M = 4.00, SD = 1.05), p = .002, d = −.992. No significant differences were found between ‘acting’ and ‘orthography’ signs, p = .209, d = 1.623, between ‘orthography’ and ‘representing’ signs, p = 1.00, d = −.294, between ‘orthography’ and ‘tracing’ signs, p = 1.00, d = −.051, and between ‘representing’ and ‘tracing’ signs, p = 1.00, d = 0.243. For iconicity ratings by non-signers, post hoc tests presented that, ‘acting’ signs (M = 4.28, SD = 1.16) were rated with higher iconicity than signs belonging to none of the four categories (M = 2.54, SD = 1.06), p < .001, d = 1.521, ‘orthography’ signs (M = 2.86, SD = 1.60), p < .001, d = 1.242, and ‘representing’ signs (M = 3.30, SD = 1.16), p < .001, d = .856. Signs not belonging to any of the four categories had lower iconicity ratings than ‘representing’ signs, p < .001, d = −.666, and ‘tracing’ signs (M = 3.78, SD = 1.36), p < .001, d = −1.088. Finally, no significant differences were found between ‘acting’ and ‘tracing’ signs, p = .156, d = .433, between signs with no ARTO schema codings and ‘orthography’ signs, p = 1.000, d = −.280, between ‘orthography’ and ‘representing’ signs, p = 1.000, d = −.386, between ‘orthography’ and ‘tracing’ signs, p = .076, d = −.809, and between ‘representing’ and ‘tracing’ signs, p = .139, d = −.423. 
‘Acting’ signs were reliably rated as more iconic, and signs with no ARTO schema codings as less iconic, by both Deaf signers and non-signers (Figure 6).

Figure 6. Comparison of iconicity rating with ARTO schema codings.

Note: Error bars show standard deviation.
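The pairwise effect sizes reported in the post hoc comparisons above are Cohen’s d. As a minimal sketch of how such an effect size is computed from two groups of ratings (the pooled-standard-deviation variant; the exact variant used by the analysis software is an assumption):

```python
import math

def cohens_d(group_a, group_b):
    """Cohen's d between two independent groups, using the pooled SD."""
    na, nb = len(group_a), len(group_b)
    mean_a = sum(group_a) / na
    mean_b = sum(group_b) / nb
    # Sample variances (n - 1 denominator)
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (na - 1)
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2))
    return (mean_a - mean_b) / pooled_sd
```

A positive d indicates the first group rated higher; the sign convention matches the order in which the groups are passed.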

4. Experiment 2: Translation selection 3AFC guessing task

To gauge the semantic transparency of signs, L1 Cantonese hearing non-signing participants with no prior knowledge of HKSL were asked to choose what they felt was the correct translation of each sign in a three-alternative forced-choice (3AFC) task. The goal of Experiment 2 is to investigate how an unfamiliar input (an HKSL sign) is mapped onto familiar input (Cantonese words), shedding light on how non-signers infer meaning across the two languages.

4.1. Participants

One hundred and three hearing non-signers, all self-reported native speakers of Hong Kong Cantonese, were recruited online at the authors’ institute (mean age = 30.52; SD = 9.22; age range 18–54 years; 79 female, 1 non-binary, 1 undisclosed). The recruitment conditions were the same as in Experiment 1 (Section 3), albeit without the recruitment of Deaf participants.

4.2. Stimuli and procedure

The signs used in this experiment were selected from the 972 signs in Experiment 1 based on the following criteria: (1) must be sequentially indivisible (i.e., not a compound); (2) rated in higher ranges of the iconicity scale in Experiment 1, that is, from four to six on the iconicity scale.Footnote 2 In order to devise appropriate foil answer choices for the 3AFC translation selection task (Experiment 2), translations were fit into a semantic space to establish their semantic relatedness with each other. The semantic space was constructed using the ‘KeyedVectors’ module from the Python library ‘Gensim’ (Řehůřek & Sojka, Reference Řehůřek and Sojka2010) to load word-embedding vectors. The ‘FastText’ vector by ToastyNews, an aggregator of Hong Kong news articles (ToastyNews, n.d.), which was trained specifically on Hong Kong Cantonese data, was used to match the Cantonese translations of the signs to the list of vectors. Modifications were made to match initially unmatched words. For example, some Cantonese words were translated into Standard Written Chinese (e.g., 唔舒服 m4syu1fuk3 > 不舒服 bat1syu1fuk3 ‘uncomfortable; sick’) as the latter is used as a written lingua franca. Any instances of Cantonese slang were also replaced with their Standard Written Chinese equivalent (e.g., M記 em1gei3 ‘M’s’ > 麥當勞 mak6dong1lou4 ‘McDonald’s [fast-food chain]’), their synonym or their hypernyms (e.g., 梳打餅 so1daa2beng2 ‘soda crackers’ > 餅乾 beng2gon1 ‘biscuits, crackers’).
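The matching step with fallbacks can be sketched as follows. This is a minimal illustration only: the study loaded the ToastyNews FastText vectors through Gensim’s `KeyedVectors`, whereas here a toy vocabulary set stands in for the vector store, and `match_translation` and the fallback table are hypothetical names built from the examples in the text.

```python
def match_translation(word, vocab, fallbacks):
    """Return a form of `word` present in `vocab`, trying Standard Written
    Chinese equivalents, synonyms or hypernyms as fallbacks; None if no
    form can be matched even after modifications."""
    if word in vocab:
        return word
    for alternative in fallbacks.get(word, []):
        if alternative in vocab:
            return alternative
    return None

# Toy stand-in for the set of words that have embedding vectors.
vocab = {"不舒服", "麥當勞", "餅乾"}

# Illustrative fallback table, mirroring the modifications described above.
fallbacks = {
    "唔舒服": ["不舒服"],   # Cantonese -> Standard Written Chinese
    "M記": ["麥當勞"],      # slang -> full name
    "梳打餅": ["餅乾"],     # hyponym -> hypernym
}
```

Words that still fail to match after these modifications (e.g., proper nouns with no semantic neighbors) were excluded, as described below.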

After filtering the 972 signs with the above criteria and through the construction of the semantic space, three of the signs were removed because their Cantonese translation lexical class had too few signs for counterbalancing (conjunctions, i.e., BECAUSE 因為 jan1wai6, AND 同埋 tung4maai4, BUT 但是 daan6si6). Twenty-four additional signs were excluded as they could not be fitted into the semantic space even after modifications were made; these signs are mostly proper nouns that have no semantic neighbors within the available signs (e.g., SELECTIVE_PLACEMENT_DIVISION 展能就業科 zin2nang4zau3jip3fo1 ‘Selective Placement Division of the Labor Department’, PLAIN_CLOTHES_POLICEMAN 便衣警探 bin6ji1ging2taam6). Due to limitations of the Qualtrics survey engine in counterbalancing, 22 signs were removed to produce identically sized groupings of lexical categories. In total, 923 signs were selected from the iconicity ratings task of Experiment 1. The videos of the selected signs were further processed by blurring the signer’s mouth movements to obscure any mouthing that might correspond to the Cantonese translation of the sign.

For Experiment 2, each sign was paired with a semantically related foil and a randomly selected foil, the Cantonese translations of which were controlled for their lexical class. The semantically related foil was created by selecting the translation five degrees removed from the target translation (i.e., its fifth-nearest neighbor) within the semantic space, whereas the random foil was any other translation from the 923 signs. An example of the answer choices presented is: True START 開始 hoi1ci2 > Semantically Related LEAVE 離開 lei4hoi1 > Random SEND_LETTER 寄信 gei3seon3. For each sign, the answer choices were presented in a random order per participant.
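Under the reading that ‘five degrees removed’ means the fifth-nearest neighbor by cosine similarity, the foil construction can be sketched as follows. This is a hedged sketch with toy 2-D vectors; `pick_foils` is a hypothetical helper, and the real space used high-dimensional FastText embeddings.

```python
import math
import random

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def pick_foils(target, vectors, k=5, rng=random):
    """Semantically related foil: the k-th nearest neighbour of the target
    within the embedding space. Random foil: any other translation."""
    others = [w for w in vectors if w != target]
    ranked = sorted(others,
                    key=lambda w: cosine(vectors[target], vectors[w]),
                    reverse=True)
    related = ranked[k - 1]
    random_foil = rng.choice([w for w in others if w != related])
    return related, random_foil

# Toy 2-D 'semantic space' centred on the target START 開始.
toy_space = {
    "開始": (1.0, 0.0),
    "w1": (0.98, 0.20), "w2": (0.90, 0.44), "w3": (0.80, 0.60),
    "w4": (0.60, 0.80), "w5": (0.20, 0.98), "w6": (-0.50, 0.87),
}
```

With these toy vectors, the fifth-nearest neighbor of 開始 is `w5`, which would serve as the semantically related foil.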

Before the experiment, participants were presented with consent information and experiment instructions in Standard Written Chinese (see https://osf.io/fqt9p). In the experiment, the signs were presented as video clips one after another. The question text instructed participants to choose the correct translation for the shown sign, below which were the three choices: the target, the semantically related foil and the random foil. The layout is shown in Figure 7.

Figure 7. Experiment 2 layout.

Note: Signer’s lips are blurred to conceal any mouthing that might correspond to the Cantonese translation of the sign.

4.3. Experiment 2 Results

To examine whether non-signers’ translation selections departed from chance, the mean selection percentages were compared against the chance level of 33%. Because Shapiro–Wilk tests indicated that the data were not normally distributed, Wilcoxon signed-rank tests were employed instead of one-sample t-tests.

The mean percentage of selecting the target translation was 72.9%, significantly higher than the chance level of 33% (W = 423,427, p < .001). This suggests that participants chose the correct translation far more frequently than would be expected by chance. The mean percentage of selecting the semantically related foil was 15.1%, significantly different from the chance level (W = 30,225, p < .001); that is, participants selected the semantically related foil well below chance, though more often than the random foil. Finally, the mean percentage of selecting the randomly selected foil was 12.1%, significantly lower than the chance level (W = 7,688, p < .001), indicating that participants rarely chose this foil.
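The reported W statistics are consistent with signed-rank tests of per-sign selection proportions against chance. As a sketch (assuming that test), the signed-rank statistic W⁺, the sum of the ranks of positive deviations from chance with tied absolute deviations given averaged ranks, can be computed as follows:

```python
def signed_rank_W(proportions, chance=1/3):
    """Wilcoxon signed-rank statistic W+: rank the absolute deviations
    from chance (ties receive averaged ranks), then sum the ranks of
    the positive deviations. Zero deviations are dropped."""
    diffs = [p - chance for p in proportions if p != chance]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        # Extend over a group of tied absolute deviations.
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg_rank = (i + j) / 2 + 1  # average of the 1-based ranks i+1..j+1
        for m in range(i, j + 1):
            ranks[order[m]] = avg_rank
        i = j + 1
    return sum(r for r, d in zip(ranks, diffs) if d > 0)
```

The p-value would then come from the W⁺ sampling distribution (or a normal approximation for large samples), as implemented in standard statistics packages.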

The correlation analysis revealed significant findings regarding the relationship between the target translation and the foils. There was a strong negative correlation between the selection of the target translation and the semantically related foil (r = −.830, p < .001). This indicates that whenever the true translation was chosen, the semantically related foil was significantly less likely to be selected. Moreover, a moderate negative correlation was found between the selection of the target translation and the randomly selected foil (r = −.675, p < .001). This suggests that the randomly selected foil was moderately less likely to be chosen whenever the true translation was selected.

These findings imply that participants who selected the correct translation were less susceptible to being misled by both the semantically related and randomly selected foils.

5. Experiment 3: Video selection 3AFC guessing task

Experiment 3 is the inverse of Experiment 2: L1 Cantonese hearing non-signers were given a written translation and asked to choose one sign from three video clips (3AFC) that they felt best fit the written translation.

5.1. Participants

Fifty-six hearing non-signers were recruited online at the authors’ institute (mean age = 28.63; SD = 7.53; age range 18–54 years; 34 female, 1 non-binary), who self-reported to be native speakers of Hong Kong Cantonese. The recruitment conditions were the same as Experiment 1 (Section 3), albeit without Deaf participants.

5.2. Stimuli and procedure

Signs were further selected from the 972 HKSL signs in Experiment 1 based on the following criteria: (1) must not be a compound; and (2) must share a dominant-hand handshape with at least two other signs. One hundred and eighty-eight signs were selected after filtering. As in Experiment 2, the videos of the selected signs were further processed by blurring the signer’s mouth movements to obscure any mouthing that might correspond to the Cantonese translation of the sign.

To devise appropriate foil answer choices for the 3AFC, each target sign was paired with two other signs that share the same handshape for the dominant hand. One foil answer choice is the ‘High Iconicity Foil’, a sign that was rated four or above for iconicity in Experiment 1. The other foil answer choice is a ‘Low Iconicity Foil’, rated for iconicity below four.

Before the experiment, participants were presented with consent information and experiment instructions in Standard Written Chinese (see https://osf.io/a4uwq). In the experiment, the translation was presented within the question text, instructing the participant to select the option that represents the translation. Below the translation were the videos of three choices: the target, the High Iconicity Foil and the Low Iconicity Foil, randomized in their position. The layout of the experiment interface is shown in Figure 8. Each of the translations was presented for selection once only.

Figure 8. Experiment 3 layout.

Note: Signer’s lips are blurred to conceal any mouthing that might correspond to the Cantonese translation of the sign.

5.3. Experiment 3 Results

To examine whether participants’ video selections departed from chance, the mean selection percentages were compared against the chance level of 33%. Because Shapiro–Wilk tests indicated that the data were not normally distributed, Wilcoxon signed-rank tests were employed instead of one-sample t-tests.

The mean percentage for the selection of the target sign was 89.33%, much higher than the chance level of 33% (W = 17,766, p < .001). This finding indicates that participants demonstrated a strong preference for the target sign, suggesting their ability to differentiate it from the foils. In contrast, the mean percentage for selecting the High Iconicity Foil was found to be significantly lower than the chance level at 4.55% (W = 0.00, p < .001). This indicates that participants were able to differentiate the target sign from the High Iconicity Foil and were less likely to select that foil. Similarly, the mean percentage for selecting the Low Iconicity Foil was significantly lower than the chance level at 6.13% (W = 5.00, p < .001). This suggests that participants were able to distinguish the target sign from the Low Iconicity Foil.

The correlation tests revealed significant negative correlations between the selection of the target sign and both the High Iconicity Foil and the Low Iconicity Foil. The correlation between the target sign and the High Iconicity Foil exhibited a moderate negative correlation (r = −.691, p < .001), indicating that when participants chose the target sign, they were less likely to select the High Iconicity Foil. Similarly, the correlation between the target sign and the Low Iconicity Foil also showed a moderate negative correlation (r = −.760, p < .001), suggesting that participants were less likely to be influenced by the Low Iconicity Foil when choosing the target video. These results imply that participants who accurately identified the target sign were not influenced by either the High Iconicity or Low Iconicity foils.

6. Experiment 4: Open cloze (open-ended guessing) task

The goal of Experiment 4 is to assess L1 Cantonese hearing non-signers’ ability to guess the meaning of signs in an open-ended manner, enabling a more flexible and nuanced understanding of iconicity.

6.1. Participants

We examined L1 Cantonese hearing non-signers’ uninhibited intuitions regarding the meanings of all 972 HKSL signs by conducting an open cloze (open-ended) guessing task. Two hundred and sixty-six self-reported native speakers of Hong Kong Cantonese participated in the task (mean age = 28.84; SD = 8.15; age range: 18–54 years; 91 female, 2 undisclosed), of whom 126 completed the whole task. One participant was excluded for providing many inappropriate responses.

6.2. Stimuli and procedure

We used the same 972 HKSL signs from Experiment 1. Signs were randomly divided into nine sets of 108 signs each. Each participant completed only one Qualtrics survey containing one set of signs. Participants were shown a video clip of each sign and asked to guess its meaning by filling in a text box using up to four Chinese characters (see https://osf.io/jw9qx). They were also asked to provide a number corresponding to their confidence in the accuracy of their guess (0 unconfident – 6 very confident).Footnote 3 The layout is shown in Figure 9. Participants who completed this experiment were not allowed to take part in the other experiments in this study.

Figure 9. Experiment 4 layout.

Note: Left: the participant is asked to enter their guess. Right: the input guess is repeated and the participant is asked to rate their confidence in it on a scale of 0–6.

6.3. Experiment 4 Results

We collected 15,228 responses from participants who either fully or partially completed the task. The average number of responses per sign was 15.7 (range: 7–23; SD = 3.53). We compared each participant’s response with the target translation for accuracy by way of orthographic overlap and semantic relatedness. A response was considered accurate if it either (1) shares orthographic overlap (at least one Chinese character in common) and lexical class with the target translation or (2) is semantically related to the target translation (i.e., as a synonym, hyponym or hypernym). We consider each criterion in turn below.

Orthographic overlap means that at least one Chinese character is shared between the target translation and the participant’s response. But because orthographic overlap does not always guarantee a semantic relationship between lexical items, we additionally checked whether the overlapping character is used in the same semantic sense, such that a response exhibiting orthographic overlap must also belong to the same lexical class as the target translation to be considered correct. For example, if the target translation for SANDWICH is 三文治 saam1 man4 zi6, then a response like 三角形 saam1 gok3 jing4 ‘triangle’ would be deemed correct because (1) it contains the character 三 saam1 ‘three’, and (2) it is a noun. A response like 三思 saam1 si1 ‘think carefully’, however, would be deemed inaccurate because ‘think carefully’ is a verb and ‘sandwich’ is a noun. All responses were manually coded for accuracy by two native Hong Kong Cantonese speakers. Among the 1,340 responses exhibiting orthographic overlap, 166 were deemed inaccurate because they did not match the lexical class of their target translations. Responses were also considered inaccurate when their meanings were incomprehensible, for example, *平下屋 ping4 haa6 uk1 ‘*flat down house’ for the target translation 地下室 dei6 haa6 sat1 ‘basement’.
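The orthographic-overlap criterion can be sketched as a small predicate (`orthographic_overlap_accurate` is a hypothetical name, and the lexical-class tags are illustrative; the study coded responses manually rather than automatically). The examples use the SANDWICH case from the text, where the target translation is 三文治 (a noun):

```python
def orthographic_overlap_accurate(response, target, response_pos, target_pos):
    """A response counts as accurate under the orthographic-overlap
    criterion if it shares at least one Chinese character with the
    target translation AND belongs to the same lexical class."""
    shares_character = bool(set(response) & set(target))
    return shares_character and response_pos == target_pos
```

So 三角形 ‘triangle’ (noun, shares 三) passes, while 三思 ‘think carefully’ (verb) and 麵包 ‘bread’ (noun, no shared character) fail.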

Forms deemed semantically related to the target translation were also considered accurate (regardless of whether any orthographic overlap was attested). To assess semantic relatedness, each response not exhibiting any orthographic overlap with the target translation was cross-checked by two native Hong Kong Cantonese speakers (22 yr female, 20 yr female). For example, if participants used a different Chinese character (i.e., a heterographic homophone) to express the same concept as the target translation, this was considered correct; for example, 王 wong4 ‘king’ is an intelligible written form for 皇帝 wong4dai3 ‘king, emperor’. Another example is the use of Cantonese slang: a response like 打機 daa2 gei1 ‘to game (i.e., gaming)’ is synonymous with the (Standard Written Chinese) target translation 玩電子遊戲 waan2 din6zi2 jau4hei3 ‘to play video games’.

Overall, 14.3% (2,183/15,228) of all responses were accurate: 5.53% (843/15,228) had full orthographic overlap with the target translation, while 8.80% (1,340/15,228) had at least one character in orthographic overlap and exhibited shared semantic characteristics with the target translation (e.g., being a synonym, hyponym, hypernym or homophone).

A linear regression was employed to investigate the relationship between the iconicity ratings from hearing participants (Experiment 1) and guessing accuracy in the open cloze task (Experiment 4). The analysis revealed a moderate positive correlation between iconicity rating and open cloze guessing accuracy (r = .669, p < .001) (Figure 10).

Figure 10. Correlation of iconicity ratings from hearing non-signers in Experiment 1 and guessing accuracy for the open cloze task in Experiment 4.
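The iconicity–accuracy correlation reported above is a Pearson r over per-sign values. A minimal pure-Python sketch (toy data; the study’s inputs were 972 per-sign rating and accuracy values):

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)
```

Applied to per-sign iconicity ratings and accuracy proportions, r near 1 indicates that more iconic signs were guessed correctly more often.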

We also investigated the correlation between choosing the target translation in the 3AFC of Experiment 2 and the open cloze responses of Experiment 4. There was a weak positive correlation (r = .408, p < .001) (Figure 11).

Figure 11. Correlation of guessing accuracy from hearing non-signers in Experiment 2 (translation selection) and Experiment 4 (open cloze).

We examined the correlation between the guessing accuracy in the video selection task (Experiment 3) and the guessing accuracy of the open cloze task (Experiment 4). We found a very weak positive correlation between the selection of videos containing the target sign and correct open-ended guesses (r = .233, p < .001) (Figure 12).

Figure 12. Correlation of guessing accuracy between Experiment 3 (video selection) and Experiment 4 (open cloze).

With the results from HKSL now analyzed, we compare our results with iconicity ratings from American Sign Language (Sections 7.1 and 7.2), Israeli Sign Language (Sections 7.3 and 7.4) and British Sign Language (Section 7.5).

7. Comparing iconicity ratings between sign languages

HKSL signs were paired with the signs from American Sign Language (ASL), British Sign Language (BSL) and Israeli Sign Language (ISL) corresponding to the same English translations. Since the spoken-language translations of these signs may not neatly or fully map to signs with identical semantic representations, we manually checked the Cantonese and English translations for HKSL and ASL to best compensate for disparate semantic nuances and considered alternative translations where necessary to ensure that the compared signs convey equivalent core meanings. For BSL and ISL, however, we relied on the published dataset and were unable to independently verify BSL, ISL and Hebrew translations, which we note as a limitation.

7.1. Correlation of iconicity ratings between L1 HKSL Deaf signers and L1 ASL Deaf signers

Six hundred and thirty-four signs with synonymous written translations were identified between HKSL in the current study and ASL in Sehyr and Emmorey (Reference Sehyr and Emmorey2019). The iconicity ratings by HKSL Deaf signers were very weakly correlated with those by ASL Deaf signers (r = .285, 95% CI [0.211, 0.355], p < .001). As the data were not normally distributed (p < .001), Kendall’s tau-b correlation test was applied, which likewise showed a very weak positive correlation (τb = .193, p < .001). The distribution of ratings by HKSL Deaf signers was right-skewed, while that by ASL Deaf signers was left-skewed. There was a main effect of sign language (HKSL versus ASL) on the iconicity ratings (F(1,632) = 55.7, p < .001): HKSL Deaf signers rated the signs as more iconic than ASL Deaf signers, t(1266) = −17.0, p < .001, d = −.954, with an average iconicity rating of 3.84 (SD = 1.38) for HKSL Deaf signers against 2.42 (SD = 1.59) for ASL Deaf signers. Crucially, albeit weakly, signs rated as more iconic by ASL Deaf signers tended also to be rated as more iconic by HKSL Deaf signers (Figure 13).

Figure 13. Correlation of HKSL and ASL Deaf signers’ iconicity ratings.
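Kendall’s τ-b, applied in the cross-language comparisons because the ratings are not normally distributed, corrects the concordant-minus-discordant pair count for ties. A minimal O(n²) sketch:

```python
import math

def kendall_tau_b(xs, ys):
    """Kendall's tau-b for two equal-length lists, with tie correction."""
    n = len(xs)
    concordant = discordant = ties_x = ties_y = 0
    for i in range(n):
        for j in range(i + 1, n):
            dx = xs[i] - xs[j]
            dy = ys[i] - ys[j]
            if dx == 0:
                ties_x += 1  # pair tied on x
            if dy == 0:
                ties_y += 1  # pair tied on y
            if dx != 0 and dy != 0:
                if (dx > 0) == (dy > 0):
                    concordant += 1
                else:
                    discordant += 1
    n0 = n * (n - 1) // 2  # total number of pairs
    return (concordant - discordant) / math.sqrt((n0 - ties_x) * (n0 - ties_y))
```

Library implementations (e.g., in standard statistics packages) use faster algorithms, but compute the same quantity.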

7.2. Correlation of iconicity ratings between L1 Cantonese hearing non-signers and L1 English hearing non-signers

The iconicity ratings of L1 Cantonese non-signers and L1 English non-signersFootnote 4 demonstrated a similar association. There was a moderate correlation between the two groups (r = .566, 95% CI [0.511, 0.617], p < .001). Due to the non-normal distribution of the data (p < .001), Kendall’s tau-b correlation was employed, showing a weak positive correlation (τb = .389, p < .001). The distributions of ratings by both L1 Cantonese and L1 English non-signers were left-skewed. A main effect of sign language (HKSL versus ASL) on the iconicity ratings (F(1,632) = 298, p < .001) was found: L1 Cantonese non-signers rated the signs as more iconic than L1 English non-signers, t(1266) = −3.76, p < .001, d = −.211, with average iconicity ratings of 3.20 (SD = 1.32) and 2.89 (SD = 1.59), respectively. As with the signers’ data, albeit with a weak effect, signs rated as more iconic by L1 Cantonese non-signers tended also to be rated as more iconic by L1 English non-signers (Figure 14).

Figure 14. Correlation of hearing non-signers’ iconicity ratings for HKSL and ASL.

7.3. Correlation of iconicity ratings between L1 HKSL Deaf signers and L1 ISL Deaf signers

One hundred and fifty-eight signs from HKSL and Israeli Sign Language (ISL) were found to have synonymous spoken-language translations between the current study and Fuks (Reference Fuks2023). There was a weak correlation between the iconicity ratings by HKSL Deaf signers and those by ISL Deaf signers (r = .359, 95% CI [0.214, 0.487], p < .001). As the data were not normally distributed (p = .007), Kendall’s tau-b correlation was applied, showing a very weak positive correlation (τb = .252, p < .001). The distributions of iconicity ratings by both HKSL and ISL Deaf signers were right-skewed. There was a main effect of sign language (HKSL versus ISL) on perceived iconicity (F(1,156) = 23.0, p < .001): ISL Deaf signers (M = 5.23, SD = 1.24) gave higher iconicity ratings than HKSL Deaf signers (M = 4.06, SD = 1.37), t(314) = −7.96, p < .001, d = −.895. The results suggest a weak tendency for signs rated as more iconic by ISL Deaf signers to also receive higher iconicity ratings from HKSL Deaf signers, with signs generally perceived as more iconic by ISL than by HKSL Deaf signers (Figure 15).

Figure 15. Correlation of HKSL and ISL Deaf signers’ iconicity ratings.

7.4. Correlation of iconicity ratings between L1 Cantonese hearing non-signers and L1 Hebrew hearing non-signers

Iconicity ratings by L1 Cantonese non-signers and L1 Hebrew non-signers demonstrated a stronger association than that among Deaf signers. The iconicity ratings of the two groups were moderately correlated (r = .500, 95% CI [0.373, 0.609], p < .001). Due to the non-normal distribution of the data (p = .040), Kendall’s tau-b correlation was employed, showing a weak positive correlation (τb = .364, p < .001). The distribution of iconicity ratings by L1 Cantonese non-signers was left-skewed, while that by L1 Hebrew non-signers was right-skewed. There was a main effect of sign language (HKSL versus ISL) on perceived iconicity, F(1,156) = 52.1, p < .001: L1 Hebrew non-signers gave higher iconicity ratings (M = 4.69, SD = 1.52) than L1 Cantonese non-signers (M = 3.31, SD = 1.29), t(314) = −8.67, p < .001, d = −.975. The results suggest a moderate tendency for signs rated as more iconic by L1 Hebrew non-signers to also receive higher ratings from L1 Cantonese non-signers, with signs generally perceived as more iconic by L1 Hebrew non-signers (Figure 16).

Figure 16. Correlation of non-signers’ iconicity ratings of HKSL and ISL.

7.5. Correlation of iconicity ratings between L1 HKSL Deaf signers and L1 BSL Deaf signers

There were 99 signs with synonymous translations identified between HKSL in the current study and BSL in Vinson et al. (Reference Vinson, Cormier, Denmark, Schembri and Vigliocco2008).Footnote 5 The iconicity ratings by HKSL Deaf signers and those by BSL Deaf signers showed a moderate correlation (r = .526, 95% CI [0.367, 0.655], p < .001). The distribution of ratings by HKSL Deaf signers was right-skewed, while that by BSL Deaf signers was left-skewed. An omnibus ANOVA revealed a significant main effect of sign language (HKSL versus BSL) on perceived iconicity of signs (F(1,97) = 37.1, p < .001). HKSL Deaf signers (M = 4.00, SD = 1.48) rated the signs numerically higher in iconicity than BSL Deaf signers (M = 3.59, SD = 1.71), although this pairwise difference did not reach significance, t(196) = −1.82, p = .071, d = −.258. The results suggest that signs rated as more iconic by BSL Deaf signers are more likely to receive higher iconicity ratings from HKSL Deaf signers, with HKSL Deaf signers generally perceiving signs as more iconic than BSL Deaf signers do (Figure 17).

Figure 17. Correlation of HKSL and BSL Deaf signers’ iconicity ratings.

8. Discussion

8.1. Summary of key findings

In this study, we explored the difference in iconicity perception between L1 HKSL Deaf signers and L1 Cantonese hearing non-signers. We then used the iconicity ratings to compare with those of other sign languages. We further investigated whether a higher iconicity rating correlates with higher guessing accuracy. We predicted that Deaf L1 HKSL signers would rate HKSL signs higher in iconicity than hearing non-signers would, aligning closely with findings from other sign languages. We also predicted a correlation between signs with higher iconicity ratings and improved guessing accuracy for those signs by hearing non-signers. Our prediction that higher iconicity facilitates recognition aligns with structure mapping accounts of iconicity, whereby image selection, schematization and encoding support form-meaning links in the manual-visual modality (Emmorey, Reference Emmorey2014; Taub, Reference Taub2001). Moreover, our multi-paradigm approach treats iconicity and transparency as graded properties of lexical items rather than binary attributes, consistent with recent syntheses (McLean et al., Reference McLean, Dunn and Dingemanse2023).

8.2. Iconicity perception across sign languages

Our HKSL iconicity ratings provided by Deaf L1 signers and hearing non-signers correlate with ratings from other sign languages, based on signs with synonymous English translations shared between our dataset (total 972 signs) and the corresponding language, that is, 634 signs from American Sign Language (ASL, Sehyr & Emmorey, Reference Sehyr and Emmorey2019), 99 from British Sign Language (BSL, Vinson et al., Reference Vinson, Cormier, Denmark, Schembri and Vigliocco2008) and 158 from Israeli Sign Language (ISL, Fuks, Reference Fuks2023). This indicates a level of convergence in the perception of iconicity for signs with synonymous translations from disparate languages.

L1 HKSL Deaf signers rating signs higher in iconicity than L1 Cantonese hearing non-signers is a phenomenon that is also attested for ISL and German Sign Language (DGS), but not ASL (Fuks, Reference Fuks2023; Sehyr & Emmorey, Reference Sehyr and Emmorey2019; Trettenbrein et al., Reference Trettenbrein, Pendzich, Cramer, Steinbach and Zaccarella2021). Historical factors may have contributed to this dissimilarity: HKSL is a much younger sign language than ASL, so its signs may have undergone fewer diachronic changes and thus retain more iconic forms. Future work should investigate the correlation between the age of a sign language and the perceived iconicity of its signs.

8.3. Iconicity perception across participant groups

Our participants overall rated HKSL signs comparably to iconicity ratings reported in other studies. The L1 Cantonese hearing non-signers in our study, however, tended to rate verbs and acting signs (i.e., reenactments of corporeal movement) as more iconic than nouns and other sign types, as was found for ASL (Sehyr & Emmorey, Reference Sehyr and Emmorey2019). This hearing non-signer tendency to rate verbs and acting signs as more iconic supports findings from signing and gestural perception and production studies with non-signer participants (Müller, Reference Müller, Müller, Cienki, Fricke, Ladewig, McNeill and Bressem2014, Reference Müller, Zlatev, Sonesson and Piotr2016; Ortega et al., Reference Ortega, Schiefner and Özyürek2019, Reference Ortega, Özyürek and Peeters2020). Our findings further suggest that this tendency for hearing non-signers to assume signs denote actions is a cross-linguistic, cross-cultural phenomenon. It would also imply that, aside from possessing a relatively limited manual-gestural semiotic repertoire, non-signers associate signs with the movements made when executing a physical action or maneuver. Hearing non-signers’ primary reliance on spoken language may lead them to generalize that the physical movements of signs represent those inherent to certain physical actions.

8.4. Relationship between iconicity and semantic transparency

Generally, if L1 HKSL Deaf signers rated a sign as highly iconic or not so iconic, then L1 Cantonese hearing non-signers were likely to do so too. Moreover, signs (n = 29) which received the highest iconicity ratings from Deaf signers, for example, WALK, FAT, RAIN, GRANDMOTHER, were guessed at chance level by hearing non-signers (in Experiment 2). As with previous studies (Fuks, Reference Fuks2023; Ortega et al., Reference Ortega, Özyürek and Peeters2020; Sehyr & Emmorey, Reference Sehyr and Emmorey2019), hearing non-signers’ open-ended guesses, collected in an open cloze task (Experiment 4), were hardly ever accurate; that is, out of 15,228 open-ended guesses made for 972 HKSL signs, some 2,183 (14.3%) guesses were accurate. Yet, given the open-ended nature of this guessing task and the range of signs presented, it is quite remarkable that our participants achieved such a consensus at all. Accurate open cloze guesses corresponded with higher iconicity ratings, for example, WALK (Deaf = 6, hearing = 5.79), FAT (Deaf = 6, hearing = 5.47), RAIN (Deaf = 6, hearing = 5.45), a trend also found for ASL and ISL (Fuks, Reference Fuks2023; Sehyr & Emmorey, Reference Sehyr and Emmorey2019). See https://osf.io/pnbdv for the highest and lowest performing signs per task.

8.5. Insights from guessing paradigms

At this point, it would seem then that, although iconicity is a gradient feature of language, distributed throughout a lexicon in various shades and densities (see Hinojosa et al., Reference Hinojosa, Haro, Magallares, Duñabeitia and Ferré2021; Perry et al., Reference Perry, Perlman and Lupyan2015; A. L. Thompson et al., Reference Thompson, Akita and Do2020; B. Thompson et al., Reference Thompson, Perlman, Lupyan, Sehyr and Emmorey2020; Winter et al., Reference Winter, Perlman, Perry and Lupyan2017 for spoken languages), semantic transparency is sequestered to extremely exceptional or outlier cases, for example, signs guessed accurately in the open-ended guessing task (Experiment 4). Klima and Bellugi (Reference Klima and Bellugi1979), however, categorized ASL into opaque, translucent and transparent signs based on how well non-signers were able to infer their meanings. Results from our three-alternative forced choice (3AFC) guessing paradigms, that is, translation selection (Experiment 2) and video selection (Experiment 3), reveal that semantic transparency is indeed gradient, dependent on the experimental design (i.e., what world knowledge is inferable from the answer choices and how this knowledge can be used to correctly deduce the meaning of the stimulus). In the case of the translation selection paradigm (Experiment 2), such world knowledge was indirectly embedded in the form of three Cantonese words serving as answer choices: two of which were close in proximity within a semantic space, and another of which was randomly selected from the set of all sign translations and matched for part of speech with the two other items. In the case of the video selection paradigm (Experiment 3), such world knowledge was provided in the form of three HKSL signs presented as answer choices (all articulated with the same handshape), while the two incorrect answer choices were rated highly and not highly for iconicity, respectively. 
Correct answers were chosen for 72.9% of the signs shown in Experiment 2 and for 89.3% of those shown in Experiment 3, both well above the 3AFC chance baseline of 33.3%.
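To make the foil-construction logic concrete, the translation selection design can be sketched as follows. This is a hypothetical illustration, not the materials-generation script used in the study: the toy vectors, the `POS` tags and the helper `pick_choices` are invented for exposition, and real semantic distances would come from distributional vectors trained on a corpus.

```python
import random

# Toy distributional vectors standing in for real word embeddings
# (in practice these would come from a model trained on a corpus).
VECTORS = {
    "walk":  [0.9, 0.1, 0.0],
    "hike":  [0.8, 0.2, 0.1],
    "rain":  [0.0, 0.9, 0.3],
    "year":  [0.1, 0.2, 0.9],
    "month": [0.2, 0.1, 0.8],
}

POS = {"walk": "V", "hike": "V", "rain": "N", "year": "N", "month": "N"}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = lambda x: sum(a * a for a in x) ** 0.5
    return dot / (norm(u) * norm(v))

def pick_choices(target, rng=random):
    """Return the target, its nearest semantic neighbour, and one
    random part-of-speech-matched distractor, mirroring the 3AFC design."""
    others = [w for w in VECTORS if w != target]
    near = max(others, key=lambda w: cosine(VECTORS[target], VECTORS[w]))
    pool = [w for w in others if w != near and POS[w] == POS[target]]
    # Fall back to any remaining word if no POS match exists in the toy set.
    filler = rng.choice(pool or [w for w in others if w != near])
    return [target, near, filler]
```

In this toy space, the nearest neighbour of YEAR is MONTH, so the trial pairs exactly the kind of fine semantic contrast (YEAR versus MONTH) that participants had to resolve, plus a random filler.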

8.6. Methodological considerations

The cross-modal nature of our guessing paradigms allows us to circumvent the empirical confound of form overlap encountered in monomodal guessing tasks. Form overlap arises when the stimulus and the answer choices have characteristics (e.g., high vowels) in common which could influence participants’ responses. It can be mitigated by designating multiple target answer choices in conjunction with randomization (McLean et al., 2023), but factors like word length and syllable structure limit such mitigations, making form overlap difficult to avoid entirely. Form overlap could explain why participants have been shown to learn foil meanings and true meanings equally well for iconic words from unfamiliar languages (Dingemanse et al., 2016; Nygaard et al., 2009; Van Hoey et al., 2023); for example, for the Japanese ideophone kira-kira, the true meaning ‘sparkling’ was remembered just as well as the foil ‘dark’ (overlap: /k/, /a/, /r/ and /i/ are present in both the Japanese form and its English translation, sparkling versus kira-kira). One way to rule out form overlap would be to carry out open cloze tests for spoken-language words rated differently for iconicity, for example, ‘What do you think kaori means?’ However, it is not clear whether such results would yield anything readily interpretable.
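As a rough illustration of how the overlap in question might be quantified, consider a Jaccard-style measure over segment inventories. This is a hypothetical measure invented here for exposition, with romanized letters standing in for phonemic segments; it is not the analysis used in the studies cited.

```python
def segment_overlap(form_a: str, form_b: str) -> float:
    """Jaccard overlap between the segment (letter) inventories
    of two forms -- a crude proxy for phonological similarity."""
    a, b = set(form_a), set(form_b)
    return len(a & b) / len(a | b)

# kira-kira shares /k/, /i/, /r/, /a/ with its true translation 'sparkling' ...
true_overlap = segment_overlap("kirakira", "sparkling")  # 4 shared of 9 total
# ... but also overlaps substantially with the foil 'dark'
foil_overlap = segment_overlap("kirakira", "dark")       # 3 shared of 5 total
```

On this crude measure, the foil ‘dark’ (0.60) overlaps with kira-kira at least as much as the true meaning ‘sparkling’ (0.44), illustrating why foil and true meanings can appear equally ‘supported’ by the form in a monomodal task.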

Incorrect guesses from our open cloze task (Experiment 4) show that participants rely on movement to infer meaning when there is little context to work with (i.e., when the sign is presented in isolation), such that the trajectory of a sign’s movement can often be recovered from the guesses it elicits. EARN yielded zero accurate responses, yet received erroneous responses that consistently entail movement, such as ‘come back’, ‘push back’, ‘enter’, ‘clasp’ and ‘pull’. SIGN LANGUAGE likewise yielded zero accurate responses, yet received erroneous responses like ‘move forward’, ‘rub’, ‘caress’ and ‘stroke’. As mentioned in the Introduction, movement in the signal is understood literally as movement, as Fuks (2023, p. 65) observed in her global analysis of open cloze answers for 98 ISL signs. Relatedly, when grouping the signs by their ARTO schema coding (Section 7.2), we found that participants rated acting signs (‘A’) higher in iconicity for both HKSL and ASL (Sehyr & Emmorey, 2019, p. 218), plausibly because participants are familiar with visual signals that correspond to embodied movement, as we observed earlier when comparing iconicity ratings between Deaf signers and non-signers (Section 8.3). Interestingly, for iconicity in spoken language, A. L. Thompson et al. (2021) note that aspects of the acoustic signal attributable to articulatory movement map well onto movement-related meanings of ideophones from several languages. Movement is a baseline from which a little context goes a long way.

8.7. Theoretical implications

When context clues are embedded within the design of the experimental paradigm (i.e., via the answer choices available to hearing non-signer participants), the potential of iconicity appears almost endless: a wide range of meanings is inferable, or at least not opaque, to those outside the linguistic system in question. Our 3AFC guessing studies (Sections 4 and 5) reveal just how ‘far’, semantically, hearing non-signer participants were able to extrapolate from the baseline of directly observable physical movement, and thus infer and differentiate between finer layers of contrastive meaning than those concerning literal physical movement alone. For example, participants were able to correctly distinguish WALK from HIKE, CONGRATULATIONS from MARRIAGE and YEAR from MONTH, a feat which is perhaps impossible in spoken language due to the confounds discussed above, in addition to the limited range of semantic categories covered by lexicalized imitative instances of spoken language (Dingemanse, 2012; Hoey, 2022; McLean, 2021).

Regarding cross-cultural factors, our study echoes the findings of Pizzuto and Volterra (2000) for signs such as DRINK and SLEEP, although our results are not strictly comparable given the different methodologies and participant groups. On the other hand, the need for the orthographic iconic strategy (the ‘O’ of the ARTO schema), whose cross-cultural differences are noted in Fischer and Gong (2011), suggests that the writing system is a significant influence on iconicity and warrants its own line of inquiry.

8.8. Limitations

There are several limitations to the current study. First, the small Deaf sample, which reflects Hong Kong’s demographics (Census and Statistics Department, 2021), precludes more fine-grained analysis. Second, we blurred mouthing to avoid lexical ‘leakage’ in the guessing tasks, which inevitably detracts from a complete and faithful presentation of each sign. Third, cross-language comparisons relied on published ASL-LEX mappings and external datasets, with limited ability to independently verify BSL and ISL translation senses (Fuks, 2023; Sehyr et al., 2021; Vinson et al., 2008); as discussed, the use of translations to ‘bridge’ between sign languages provides limited insight into cross-cultural analysis. Finally, the study investigated iconic strategies only for items with a single ARTO coding; signs with complex ARTO schema codings require further investigation.

9. Conclusion

As for the existence of shared forms between sign languages, iconicity-focused norms are a good starting point for investigating semiotic convergence across unrelated languages. A few questions come to mind: Are some meanings more likely to be iconic than others? Are similar forms, for example, handshape (Occhino, 2017; Zwitserlood et al., 2023), or similar strategies, for example, handling (Padden et al., 2013, 2015), employed to convey such meanings across disparate sign languages? How does semantic transparency vary according to form or strategy across participant groups (Lepic et al., 2016; Ortega et al., 2020; van Nispen et al., 2017), for example, Deaf signers of ASL guessing HKSL sign meanings versus hearing non-signers doing so? Such questions are left to subsequent research. Only by establishing norms for more sign languages can we begin investigating the convergence of semiotic strategies on a grander scale, which will in turn inform our understanding of how pidgin sign languages (e.g., International Sign; McKee & Napier, 2002; Supalla & Webb, 1995) and new sign languages spontaneously emerge. Moreover, by implementing more forced-choice guessing paradigms, we can gauge the extent to which translucent iconicity (a.k.a. secondary iconicity; Sonesson, 1996, p. 2) permeates language as a semiotic strategy.
Finally, over the course of conducting the experiments that make up this paper, our Deaf L1 HKSL collaborators continually showed us that, by limiting our experimental materials to one HKSL sign per meaning, we have barely scratched the phonological surface of the language – a surface rich in variation and dynamism, owing perhaps to the lack of standardization and official recognition on the one hand, and to the communicative adaptability and steadfast resilience of its signers on the other.

Data availability statement

The data that support the findings of this study are openly available on OSF at https://doi.org/10.17605/OSF.IO/5VXZF.

Footnotes

1 We define asymmetry as two-handed signs in which the dominant and non-dominant hands are not mirror images of each other; see Figure 3.

2 Only signs rated as highly iconic (≥4 on the 0–6 scale) were selected, to avoid floor effects: Experiment 2 explores the semantic transparency of signs rather than simply testing whether participants can guess the meanings of arbitrary signs.

3 The confidence scale was shifted from 1–7 in Sehyr and Emmorey (2019) to 0–6 to retain seven response options while using a zero-based scale for clearer interpretation and statistical convenience.

4 The L1 English data come from Sehyr and Emmorey (2019): hearing monolingual English speakers with no prior knowledge of any sign language rated 991 ASL signs for iconicity on a scale of 1 to 7 after being shown the English translation (21–37 participants per sign). All reported normal or corrected-to-normal vision. Demographic details and participation restrictions were not reported.

5 Vinson et al. (2008) did not collect iconicity ratings from hearing non-signers.

References

Caselli, N. K., & Pyers, J. E. (2017). The road to language learning is not entirely iconic: Iconicity, neighborhood density, and frequency facilitate acquisition of sign language. Psychological Science, 28(7), 979–987. https://doi.org/10.1177/0956797617700498
Census and Statistics Department, Hong Kong Special Administrative Region. (2021). Social data collected via the general household survey: Special topics report—Report no. 63—Persons with disabilities and chronic diseases (Statistical Reports No. 63). Census and Statistics Department. https://www.censtatd.gov.hk/en/EIndexbySubject.html?pcode=B1130121&scode=453#section1
Dingemanse, M. (2012). Advances in the cross-linguistic study of ideophones. Language and Linguistics Compass, 6(10), Article 10. https://doi.org/10.1002/lnc3.361
Dingemanse, M., Schuerman, W., Reinisch, E., Tufvesson, S., & Mitterer, H. (2016). What sound symbolism can and cannot do: Testing the iconicity of ideophones from five languages. Language, 92(2), e117–e133. https://doi.org/10.1353/lan.2016.0034
Emmorey, K. (2014). Iconicity as structure mapping. Philosophical Transactions of the Royal Society B: Biological Sciences, 369(1651), Article 1651. https://doi.org/10.1098/rstb.2013.0301
Fischer, S., & Gong, Q. (2011). Marked hand configurations in Asian sign languages. In Formational units in sign languages (pp. 19–42). De Gruyter Mouton. https://doi.org/10.1515/9781614510680.19
Fuks, O. (2023). Iconicity perception under the lens of iconicity rating and transparency tasks in Israeli sign language (ISL). Sign Language Studies, 24(1), 46–92. https://doi.org/10.1353/sls.2023.a912330
Hinojosa, J. A., Haro, J., Magallares, S., Duñabeitia, J. A., & Ferré, P. (2021). Iconicity ratings for 10,995 Spanish words and their relationship with psycholinguistic variables. Behavior Research Methods, 53(3), 1262–1275. https://doi.org/10.3758/s13428-020-01496-z
Hodge, G., & Ferrara, L. (2022). Iconicity as multimodal, polysemiotic, and plurifunctional. Frontiers in Psychology, 13, 808896. https://doi.org/10.3389/fpsyg.2022.808896
Hoey, T. V. (2022). A semantic map for ideophones. OSF. https://doi.org/10.31219/osf.io/muhpd
Hwang, S.-O., Tomita, N., Morgan, H., Ergin, R., İlkbaşaran, D., Seegers, S., Lepic, R., & Padden, C. (2017). Of the body and the hands: Patterned iconicity for semantic categories. Language and Cognition, 9(4), 573–602. https://doi.org/10.1017/langcog.2016.28
Klima, E. S., & Bellugi, U. (1979). The signs of language. Harvard University Press.
Lepic, R., Börstell, C., Belsitzman, G., & Sandler, W. (2016). Taking meaning in hand: Iconic motivations in two-handed signs. Sign Language & Linguistics, 19(1), 37–81. https://doi.org/10.1075/sll.19.1.02lep
Mak, J., & Tang, G. (2011). Movement types, repetition, and feature organization in Hong Kong Sign Language. In Channon, R. & van der Hulst, H. (Eds.), Formational units in sign languages (pp. 315–338). De Gruyter Mouton.
McKee, R. L., & Napier, J. (2002). Interpreting into international sign pidgin: An analysis. Sign Language & Linguistics, 5(1), 27–54. https://doi.org/10.1075/sll.5.1.04mck
McLean, B. (2021). Revising an implicational hierarchy for the meanings of ideophones, with special reference to Japonic. Linguistic Typology, 25(3), 507–549. https://doi.org/10.1515/lingty-2020-2063
McLean, B., Dunn, M., & Dingemanse, M. (2023). Two measures are better than one: Combining iconicity ratings and guessing experiments for a more nuanced picture of iconicity in the lexicon. Language and Cognition, 1–24. https://doi.org/10.1017/langcog.2023.9
Müller, C. (2014). Gestural modes of representation as techniques of depiction. In Müller, C., Cienki, A., Fricke, E., Ladewig, S. H., McNeill, D., & Bressem, J. (Eds.), Body—language—communication (Vol. 2, pp. 1687–1702). De Gruyter Mouton. https://doi.org/10.1515/9783110302028.1687
Müller, C. (2016). From mimesis to meaning: A systematics of gestural mimesis for concrete and abstract referential gestures. In Zlatev, J., Sonesson, G., & Piotr, K. (Eds.), Meaning, mind and communication: Explorations in cognitive semiotics (pp. 211–226). Peter Lang.
Nygaard, L. C., Herold, D. S., & Namy, L. L. (2009). The semantics of prosody: Acoustic and perceptual evidence of prosodic correlates to word meaning. Cognitive Science, 33(1), 127–146. https://doi.org/10.1111/j.1551-6709.2008.01007.x
Occhino, C. (2017). An introduction to embodied cognitive phonology: Claw-5 hand-shape distribution in ASL and Libras. Complutense Journal of English Studies, 25, 69–103. https://doi.org/10.5209/CJES.57198
Occhino, C., Anible, B., Wilkinson, E., & Morford, J. P. (2017). Iconicity is in the eye of the beholder: How language experience affects perceived iconicity. Gesture, 16(1), 100–126. https://doi.org/10.1075/gest.16.1.04occ
Ortega, G., Özyürek, A., & Peeters, D. (2020). Iconic gestures serve as manual cognates in hearing second language learners of a sign language: An ERP study. Journal of Experimental Psychology: Learning, Memory, and Cognition, 46(3), 403–415. https://doi.org/10.1037/xlm0000729
Ortega, G., Schiefner, A., & Özyürek, A. (2019). Hearing non-signers use their gestures to predict iconic form-meaning mappings at first exposure to signs. Cognition, 191, 103996. https://doi.org/10.1016/j.cognition.2019.06.008
Padden, C. A., Hwang, S.-O., Lepic, R., & Seegers, S. (2015). Tools for language: Patterned iconicity in sign language nouns and verbs. Topics in Cognitive Science, 7(1), 81–94. https://doi.org/10.1111/tops.12121
Padden, C. A., Meir, I., Hwang, S.-O., Lepic, R., Seegers, S., & Sampson, T. (2013). Patterned iconicity in sign language lexicons. Gesture, 13(3), 287–308. https://doi.org/10.1075/gest.13.3.03pad
Perry, L. K., Perlman, M., & Lupyan, G. (2015). Iconicity in English and Spanish and its relation to lexical category and age of acquisition. PLoS One, 10(9), e0137147. https://doi.org/10.1371/journal.pone.0137147
Pizzuto, E., & Volterra, V. (2000). Iconicity and transparency in sign languages: A cross-linguistic cross-cultural view. In The signs of language revisited: An anthology to honor Ursula Bellugi and Edward Klima (pp. 261–286). Lawrence Erlbaum Associates.
Qualtrics. (n.d.). Qualtrics (Version 202205) [Computer software]. Qualtrics. http://www.qualtrics.com
Řehůřek, R., & Sojka, P. (2010). Software framework for topic modelling with large corpora. In Proceedings of the LREC 2010 workshop on new challenges for NLP frameworks (pp. 45–50). http://is.muni.cz/publication/884893/en
Sandler, W. (2008). The syllable in sign language: Considering the other natural language modality. https://www.academia.edu/64463702/The_Syllable_in_Sign_Language_Considering_the_Other_Natural_Language_Modality
Sehyr, Z. S., Caselli, N., Cohen-Goldberg, A. M., & Emmorey, K. (2021). The ASL-LEX 2.0 project: A database of lexical and phonological properties for 2,723 signs in American sign language. The Journal of Deaf Studies and Deaf Education, 26(2), 263–277. https://doi.org/10.1093/deafed/enaa038
Sehyr, Z. S., & Emmorey, K. (2019). The perceived mapping between form and meaning in American sign language depends on linguistic knowledge and task: Evidence from iconicity and transparency judgments. Language and Cognition, 11(2), 208–234. https://doi.org/10.1017/langcog.2019.18
Sonesson, G. (1996). The ecological foundations of iconicity. In Rauch, I. & Carr, G. F. (Eds.), Semiotics around the world: Synthesis in diversity: Proceedings of the fifth congress of the International Association for Semiotic Studies, Berkeley 1994 (Vol. 2, pp. 739–742). Mouton de Gruyter.
Supalla, T., & Webb, R. (1995). The grammar of international sign: A new look at pidgin languages. In Language, gesture, and space (pp. 333–352). Lawrence Erlbaum.
Sze, F., Lo, C., Lo, L., & Chu, K. (2013). Historical development of Hong Kong sign language. Sign Language Studies, 13(2), Article 2. https://doi.org/10.1353/sls.2013.0002
Taub, S. F. (2001). Language from the body: Iconicity and metaphor in American sign language. Cambridge University Press. https://doi.org/10.1017/CBO9780511509629
The Centre for Sign Linguistics and Deaf Studies, The Chinese University of Hong Kong. (2018). Hong Kong sign language browser [Dataset]. http://www.cslds.org/hkslbrowser/index.jsp
Thompson, A. L., Akita, K., & Do, Y. (2020). Iconicity ratings across the Japanese lexicon: A comparative study with English. Linguistics Vanguard, 6(1). https://doi.org/10.1515/lingvan-2019-0088
Thompson, A. L., Van Hoey, T., & Do, Y. (2021). Articulatory features of phonemes pattern to iconic meanings: Evidence from cross-linguistic ideophones. Cognitive Linguistics, 32(4), 563–608. https://doi.org/10.1515/cog-2020-0055
Thompson, B., Perlman, M., Lupyan, G., Sehyr, Z. S., & Emmorey, K. (2020). A data-driven approach to the semantics of iconicity in American sign language and English. Language and Cognition, 12(1), 182–202. https://doi.org/10.1017/langcog.2019.52
Thompson, R. L., Vinson, D. P., Woll, B., & Vigliocco, G. (2012). The road to language learning is iconic: Evidence from British sign language. Psychological Science, 23(12), 1443–1448. https://doi.org/10.1177/0956797612459763
Trettenbrein, P. C., Pendzich, N.-K., Cramer, J.-M., Steinbach, M., & Zaccarella, E. (2021). Psycholinguistic norms for more than 300 lexical signs in German sign language (DGS). Behavior Research Methods, 53(5), 1817–1832. https://doi.org/10.3758/s13428-020-01524-y
Van Hoey, T., Thompson, A. L., Do, Y., & Dingemanse, M. (2023). Iconicity in ideophones: Guessing, memorizing, and reassessing. Cognitive Science, 47(4), e13268. https://doi.org/10.1111/cogs.13268
van Nispen, K., van de Sandt-Koenderman, W. M. E., & Krahmer, E. (2017). Production and comprehension of pantomimes used to depict objects. Frontiers in Psychology, 8. https://doi.org/10.3389/fpsyg.2017.01095
Vinson, D. P., Cormier, K., Denmark, T., Schembri, A., & Vigliocco, G. (2008). The British sign language (BSL) norms for age of acquisition, familiarity, and iconicity. Behavior Research Methods, 40(4), 1079–1087. https://doi.org/10.3758/BRM.40.4.1079
Winter, B., Perlman, M., Perry, L. K., & Lupyan, G. (2017). Which words are most iconic? Iconicity in English sensory words. Interaction Studies, 18(3), 443–464. https://doi.org/10.1075/is.18.3.07win
Woodward, J. (1993). Intuitive judgments of Hong Kong signers about the relationship of sign language varieties in Hong Kong and Shanghai. In CUHK Papers in Linguistics. https://eric.ed.gov/?id=ED363110
Words.hk. (n.d.). 粵典 words.hk. https://words.hk/base/about/
Yang, J. (2015). Chinese sign language. In Jepsen, J. B., De Clerck, G., Lutalo-Kiingi, S., & McGregor, W. B. (Eds.), Sign languages of the world (pp. 177–194). De Gruyter Mouton. https://doi.org/10.1515/9781614518174-012
Zwitserlood, I. E. P., van der Kooij, E., & Crasborn, O. A. (2023). Units of sub-sign meaning in NGT. 322. https://doi.org/10.1075/sll.20009.van

Figure 1. Example of objects represented in Representing signs. Note: Red highlights the handshape element of interest. From left to right: OUTSIDE 外 (Representing: a wall [the non-dominant hand]); FLOWER 花 (Representing: flower petals).


Figure 2. Examples of iconic strategies in signs according to the ARTO schema. Note: Red highlights the handshape element of interest. From top left to bottom right: KEY 鎖匙 (Acting: unlocking with a held key); OUTSIDE 外 (Representing: a wall [the non-dominant hand]); PIPE 管 (Tracing: outline of a pipe); PERSON 人 (Orthography: the Chinese character 人 ‘person’ from the signer’s perspective); PAIN 痛 (No iconic strategy present in handshape).


Table 1. Iconicity strategies used to code all signs in the HKSL database following the ARTO schema


Figure 3. ARTO schema coding of a compound sign and an asymmetrical sign. Note: Left: UNIVERSITY 大學 is a compound sign of two signs; firstly the Chinese character 大 ‘big’ (Orthographic) and secondly the sign for READ/STUDY 讀 (Acting) articulated (‘mouthed’) here with open and unrounded lips corresponding to the unrounded vowel in the second syllable of the Cantonese word 大學 /tai˨ hɔk˨/ ‘university’ (whereas the same sign articulated with pursed lips is used for the monomorphemic form of READ/STUDY corresponding to the rounded vowel of the Cantonese word 書 /sy˥/ ‘book’). Right: WARN 警吿 is an asymmetrical sign; the dominant hand (red) depicts the action of warning (uppercase, Acting) and the non-dominant hand (blue) depicts a placeholder person (lowercase, Representing). These two signs are coded ‘O,A’ and ‘Ar’, respectively, as denoted in the top left corner.


Figure 4. Experiment 1 layout (hearing participants to the left; Deaf participants to the right). Note: The line of translation presented to the hearing participants can be found beneath the video. For better Deaf accessibility, the text size was enlarged, and complex phrases were segmented with spaces. Translations: ‘In the video, the sign means […]. Do you think the sign resembles its meaning?’


Figure 5. Iconicity rating distribution for Experiment 1.


Figure 6. Comparison of iconicity ratings with ARTO schema codings. Note: Error bars show standard deviation.


Figure 7. Experiment 2 layout. Note: The signer’s lips are blurred to conceal any mouth movements which may or may not correspond to the Cantonese.


Figure 8. Experiment 3 layout. Note: The signer’s lips are blurred to conceal any mouth movements which may or may not correspond to the Cantonese.


Figure 9. Experiment 4 layout. Note: Left: the participant is asked to enter their guess. Right: the input guess is repeated and the participant is asked to rate how confident they are about it on a scale of 0–6.


Figure 10. Correlation of iconicity ratings from hearing non-signers in Experiment 1 and guessing accuracy for the open cloze task in Experiment 4.


Figure 11. Correlation of guessing accuracy from hearing non-signers in Experiment 2 (translation selection) and Experiment 4 (open cloze).


Figure 12. Correlation of guessing accuracy between Experiment 3 (video selection) and Experiment 4 (open cloze).


Figure 13. Correlation of HKSL and ASL Deaf signers’ iconicity ratings.


Figure 14. Correlation of hearing non-signers’ iconicity ratings for HKSL and ASL.


Figure 15. Correlation of HKSL and ISL Deaf signers’ iconicity ratings.


Figure 16. Correlation of non-signers’ iconicity ratings of HKSL and ISL.


Figure 17. Correlation of HKSL and BSL Deaf signers’ iconicity ratings.