The puzzle of ideography is, Morin argues, that in the history of written sign systems, there seem to be no self-sufficient, generalist ideographic codes. Ideographic codes are sign systems that represent ideas without the mediation of an auxiliary code. Most writing systems are generalist because they can be used for a wide range of communicative functions. However, they are not ideographic because they are glottographic: They represent semantic contents only by representing the sounds of words in spoken languages. Without knowledge of the relevant spoken languages, written languages remain uninterpretable. Although ideographic codes exist, their use is restricted to specific and specialized contexts. For example, emojis and musical notation are self-sufficient codes, because they encode meaningful units without the need for external mediation. However, their use is restricted to the expression of simple emotions and music, which limits their generality.
Morin's question is why generalist ideographic codes, despite being conceptually possible, have not developed. He rejects one answer to this question – that ideographic codes are more difficult to learn than nonideographic ones (the “learnability problem”) – on the grounds that it underestimates the human capacity for learning. Instead, he defends a “standardization problem”: The problem with ideographic codes is getting everyone to agree on which code to use. In face-to-face communication, he argues, standardization issues do not arise, because people can give and receive real-time feedback to clarify which sign system they are using. However, because ideographic codes are used for communicating with temporally and spatially distant interlocutors, the opportunity for such feedback is missing, and standardization remains a problem.
Morin's account gets something right – namely, that the problem concerns the interpretability of ideographic signs. However, this problem of interpretability is not best characterized as a problem of standardization. Standardization would not be an insurmountable challenge if generalist ideographic languages were independently viable: Even if ideographic codes are largely used in distal communication, a viable, generalist ideographic code could be learned and used in face-to-face interaction, with opportunities for interactive repair, and in such cases the standardization challenge could be overcome. The problem with ideographic codes must therefore be more fundamental. We think the underlying issue is neither standardization nor learnability, but pragmatic interpretation.
As proponents of both the learnability and standardization views agree, ideographic codes are limited. In Morin's words, both explanations agree that “graphic codes cannot encode a broad range of meanings” (target article, sect. 5, para. 1). We interpret this claim as consensus that, in contrast to glottographic codes that exploit the countless combinatorial possibilities of representing phonemes, there may be a comparatively small number of ideographic signs that can be both efficiently reproduced and used to represent ideas in a reasonably unambiguous way. This consensus supports a fairly crude and empirically untested hypothesis, which we nonetheless find intuitive.
Untested hypothesis: There is a limit to the complexity of the messages that ideographic codes can be used to express before they become either unwieldy or uninterpretable. This is a consequence of the very format of ideographic representation, which requires (something akin to) drawing skills and the production of increasingly complex symbols. The limited use of ideographic codes therefore stems from an intrinsic feature (their ideographic format), rather than from a contingent one (the fact that they are mostly used asynchronously). This is the main reason for the nonexistence of generalist ideographic codes.
If we are right, then the users of ideographic codes must at some point attempt to plot a course between Scylla and Charybdis. Scylla is the threat of unwieldiness; Charybdis, the problem of ambiguity. As messages get more complex, the users of ideographic codes face a choice: Either they must produce more elaborate and more detailed ideographic signs, sufficient for the visual discrimination of similar but potentially important semantic differences; or they must accept the limited expressive power of their code. In the former case, producing the code will become slower and more cumbersome, especially in comparison to “cheap and fast” glottographic signs. Producing such signs will also require greater artistry on the part of users, making the ideographic code a less appealing tool for communication than glottographic codes already in use. In the latter case, where the limited expressive power of the code is accepted, complex messages will remain ambiguous – and a verbal gloss will be needed to interpret them. This will undermine the self-sufficiency of the ideographic code, and make it less useful for the kinds of distal communication for which it is supposedly well suited.
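To give this dilemma a crude, back-of-envelope form (the following is our own illustration, not Morin's, and the figures are assumptions rather than data): suppose a generalist ideographic code must keep \(N\) signs reliably discriminable by eye, and suppose each additional stroke or graphic feature contributes roughly \(b\) bits of visual information. Then, very roughly,
\[
\text{information per sign} \;\gtrsim\; \log_2 N,
\qquad
\text{strokes per sign} \;\gtrsim\; \frac{\log_2 N}{b}.
\]
On these assumptions, a lexicon-sized inventory (say \(N \approx 50{,}000\) meanings, so \(\log_2 N \approx 15.6\) bits per sign) demands markedly more elaborate signs than an inventory of a few dozen phonemic symbols (\(\log_2 30 \approx 4.9\) bits) – one way of making the unwieldiness horn of the dilemma vivid.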
This problem does not seem to be faced by glottographic codes. As pressure increases for a more expressive – and thus more general – sign system, natural languages expand. Their users innovate new words and grammatical forms, to permit the expression of a wider range of ideas (e.g., Moore, 2021; Progovac, 2015). However, these innovations consist largely of new lexical and grammatical forms (and, at most, new punctuation marks), rather than of new glottographic marks. New words can therefore be handled using existing alphabets, placing little pressure on existing glottographic codes. Even if ideographic codes recombine elements, they surely cannot do so in as minimal and elegant a manner as spoken languages do with their couple of dozen phonemes. This is seemingly a consequence of the format of graphic codes: Increasing expressive power means increasing the complexity of ideographic representations. This is not the case with glottographic codes.
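A toy calculation (again ours, with assumed figures) makes the asymmetry vivid. A glottographic code built on roughly 30 phoneme-level signs can already distinguish
\[
30^{5} = 24{,}300{,}000
\]
possible five-segment written forms, whereas a purely ideographic code must introduce – and its readers must learn to discriminate – a further sign for each additional meaning it is to express.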
We reiterate that our hypothesis is crude and untested. However, it generates a prediction: Code systems that start off as ideographic may, over time, become more glottographic as pressure grows for a more expressively powerful code system. In such cases, a natural compromise is that, once ideographic capacity is reached, the expressive power of the code is increased by incorporating glottographic elements of extant natural languages. Meanwhile, pure ideographic codes will remain suitable only for specialized use.
Financial support
Leda Berio's work is supported by the NRW Profillinie “Interact!” (PROFILNRW-2020-135). Berke Can is supported by a University of Warwick Chancellor's International Scholarship. Katharina Helming and Richard Moore are supported by UKRI Future Leaders Fellowship grant MR/S033858/1: The Communicative Mind. Giulia Palazzolo is supported by a La Sapienza University International Scholarship.
Competing interest
None.