
Using depiction for efficient communication in LIS (Italian Sign Language)

Published online by Cambridge University Press:  19 July 2021

ANITA SLONIMSKA*
Affiliation:
Radboud University, Centre for Language Studies, The Netherlands
ASLI ÖZYÜREK
Affiliation:
Radboud University, Centre for Language Studies, The Netherlands, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands, and Donders Institute for Brain, Cognition & Behaviour, Nijmegen, The Netherlands
OLGA CAPIRCI
Affiliation:
Institute of Cognitive Sciences and Technologies (ISTC), National Research Council (CNR) of Italy, Rome RM, Italy
* Address for correspondence: e-mail: a.s.slonimska@let.ru.nl.

Abstract

Meanings communicated through depiction constitute an integral part of how speakers and signers actually use language (Clark, 2016). Recent studies have argued that, in sign languages, a depicting strategy such as constructed action (CA), in which a signer enacts the referent, is used for referential purposes in narratives. Here, we tested the referential function of CA in a more controlled experimental setting, outside a narrative context. Given the iconic properties of CA, we hypothesized that this strategy could be used for efficient information transmission. We therefore asked whether the use of CA increased as the amount of information to be communicated increased. Twenty-three deaf signers of LIS described unconnected images, which varied in the amount of information represented, to another player in a director–matcher game. Results revealed that participants used CA to communicate core information about the images and increased their use of CA as the images became informationally denser. The findings show that the iconic features of CA can serve a referential function, in addition to a depictive one, outside a narrative context, and can be exploited to achieve communicative efficiency.

Information

Type
Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© The Author(s), 2021. Published by Cambridge University Press

Fig. 1. Diagrammatic properties of CA when encoding relations between two referents and their interaction.


Fig. 2. Stimulus images representing events at the various semantic information density levels. Levels 1–2 are in JPG format; Levels 3–5 are in GIF format, in which only the dynamic action is animated.


Fig. 3. Example of the segmentation of a single stimulus with 5 MS and coding of linguistic strategy used in each MS.


Fig. 4. Raw proportions of linguistic strategies used to encode a stimulus in each density level.


Table 1. Best fit model in a logit scale (model fit by maximum likelihood, Laplace Approximation) regarding the proportion of lexical units used for encoding. Contrasts reflect pairwise comparisons between Level 1 and all other levels.


Table 2. Best fit model in a logit scale (model fit by maximum likelihood, Laplace Approximation) regarding the proportion of CA used for encoding. Contrasts reflect pairwise comparisons between Level 1 and all other levels.


Fig. 5. A signer depicting Referent 1 – woman (encoded through head direction, facial expression, and eye-gaze) and the static action (the signer's right hand) via CA (Level 2).


Fig. 6. A signer depicting Referent 1 – bear (encoded through the torso, head, eye-gaze, and facial expression of the signer), its static action (the signer's right hand), and dynamic action 1 – caressing (the signer's left hand) via CA (Level).


Fig. 7. A signer depicting Referent 2 – bunny (encoded through the torso, head, eye-gaze, and facial expression of the signer) and dynamic action 2 – tapping (the signer's right hand) via CA (Level 4).


Fig. 8. A signer depicting Referent 1 – bird (encoded through the torso, head, and facial expression of the signer), its static action (the signer's left hand), and the active action of Referent 2 (the signer's right hand) via CA (Level 4).


Table 3. Best fit model in a logit scale (model fit by maximum likelihood, Laplace Approximation) regarding the proportion of combined strategies used for encoding. Contrasts reflect pairwise comparisons between Level 1 and all other levels.


Fig. 9. Raw proportions of the linguistic strategy combinations used to encode a stimulus in each density level.


Fig. 10. A signer encoding Referent 2 – bird (encoded through the torso, head, eye-gaze, and facial expression of the signer) and dynamic action 2 – pecking (the signer's right hand) with a depicting construction (Level 4).


Fig. 11. A signer encoding Referent 1 – dog (encoded through the torso, head, and eye-gaze of the signer) and holding action (the signer’s left hand) via CA and dynamic action 2 – pecking (the signer’s right hand) with a depicting construction (Level 4).