Frost's timely review reminds visual word recognition researchers of the rich variety of orthographies in the world's languages. We argue, however, that the variety of orthographies does not lead to the view that “letter-order insensitivity is neither a general property of the cognitive system nor a property of the brain in encoding letters” (target article, Abstract). It is actually unclear what Frost means by the “cognitive procedures that are implicated in processing printed words” (sect. 2.1, para. 1). He could, for example, be referring to the entire process of deriving sound and meaning from the visual form of written symbols, or simply to the process of identifying the symbols and the order in which they appear. We feel that some of Frost's conclusions follow from confusion between these two possibilities. We suggest that the basic perceptual processes supporting the identification of written symbols are universals, and are governed by exactly the same principles as all other forms of visual object recognition. However, what the reader does with those symbols will depend crucially on the properties of the language and on the mapping between those symbols and the sound and meaning of the language.
Consider first the contrast between English, where there is transposed-letter (TL) priming, and Hebrew, where there is no TL priming in lexical decision. As Frost suggests, it might be possible to make some ad hoc structural changes to a model of reading to accommodate this difference. An alternative is to suggest that this difference follows from a fixed and universal model of object/symbol recognition combined with the differing processing demands imposed by languages with contrasting phonological, morphological, and lexical properties. Norris et al. (2010) and Norris and Kinoshita (in press) have proposed a noisy-sampling model of word recognition in which evidence for both letter identity and letter position/order accumulates over time. Early in time, order information may be very ambiguous, but, as more samples arrive, that ambiguity is resolved. Even in English, readers can tell that JUGDE is not a real word, even though JUGDE primes JUDGE as much as an identity prime does in a task where the prime is presented for about 50 msec. Now consider the implications of this process for the difference in TL priming between English and Hebrew. In Hebrew the lexical space is very dense: transposing two letters in a root will typically produce a different root. In English, transposing two letters will generally produce a nonword, so the closest word will usually still be the word from which the TL prime was derived. Identifying words in Hebrew will therefore require readers to accumulate more evidence about letter order than in English. That is, because of the differences between the two languages, English readers can tolerate more slop in the system, but the underlying process of identifying the orthographic symbols remains the same. The characteristics of the language impose different task demands on word recognition, but the structural properties of the model remain the same.
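The logic of a noisy-sampling account can be illustrated with a toy simulation (our own illustrative sketch, not the Norris et al. implementation; the `pos_noise` parameter and sampling scheme are assumptions made for exposition). Each sample reports one letter identity but is sometimes attributed to a neighbouring position, so with few samples a transposed-letter string such as JUGDE is hard to distinguish from JUDGE, while with enough samples the order information resolves.

```python
import random

def sample_evidence(stimulus, n_samples, pos_noise=0.3, rng=None):
    """Accumulate noisy samples of letter identity and position.

    Each sample reports one letter of `stimulus`; with probability
    `pos_noise` (an illustrative value) the letter is attributed to an
    adjacent position. Returns per-position letter counts.
    """
    rng = rng or random.Random(0)
    counts = [dict() for _ in stimulus]
    for _ in range(n_samples):
        pos = rng.randrange(len(stimulus))
        letter = stimulus[pos]
        if rng.random() < pos_noise:
            # position uncertainty: attribute the letter to a neighbour slot
            pos = max(0, min(len(stimulus) - 1, pos + rng.choice([-1, 1])))
        counts[pos][letter] = counts[pos].get(letter, 0) + 1
    return counts

def match_score(counts, word):
    """Proportion of accumulated samples consistent with `word`."""
    total = sum(sum(c.values()) for c in counts)
    hits = sum(c.get(letter, 0) for c, letter in zip(counts, word))
    return hits / total if total else 0.0
```

On this sketch, after many samples of JUGDE the veridical string scores higher than JUDGE, which in turn scores higher than an unrelated word; the point of contact with the English/Hebrew contrast is that a dense lexical space forces the decision criterion to wait for more samples before trusting order information, whereas a sparse space can settle early.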
Note also that whereas Frost suggests that many of the linguistic differences are a consequence of learning different statistical regularities, in this case at least the difference follows primarily from the contents of the lexicon and does not require the reader to learn about the statistical properties of the language. In line with this view, in the same–different task, in which the input is matched against a single referent rather than the entire lexicon, robust TL priming effects are observed with Hebrew words (Kinoshita et al., in press). This example also counters Frost's suggestion that the orthographic processing system is not autonomous but is influenced by the language: here the basic perceptual processes are not modulated by the language at all.
In describing the variety of orthographies, Frost also argues that the way writing systems eventually evolved is not arbitrary, and that orthographies are structured so that they “optimally represent the languages' phonological spaces and their mapping into semantic meaning” (sect. 3, para. 1). But appeals to optimality make little sense unless they are accompanied by a formal definition of optimality and a procedure for determining what constitutes an optimal solution. Frost's definition of optimality seems to be post hoc, and depends entirely on assumptions about the relative difficulty of different cognitive processes. Note, too, that the development of writing systems is strongly influenced by the writing materials available: to a Sumerian tax collector with access only to clay tablets and a blunt reed for a stylus, cuneiform may be a more “optimal” orthography than pictograms containing many curved features.
Frost's evolutionary argument also seems to rest on the assumption that writing systems have evolved to some optimal state. Even if there is an element of truth to the evolutionary argument, there is no reason to assume that writing systems have reached the optimal end of their evolution. This is particularly apparent where alternative writing systems exist for a single language. For example, Japanese uses both kanji, a logographic script imported from China, and kana, a syllabary derived from kanji. Is kana more optimal than kanji? The writing system adopted by a particular language necessarily reflects the constraints imposed by the language (e.g., in Japanese, potentially all words could be written using only the kana syllabary, but this would produce too many homophones, which are disambiguated by the use of different kanji characters). But that does not mean that its evolution was driven by a “process of optimization” based on linguistic constraints. On the timescale of human evolution, writing systems have a very short history (mass literacy is only about 500 years old), and historical and chance cultural events – contact between two cultures, the invention of a writing medium, spelling reform, to name just a few – seem to have played a large role, interacting with linguistic constraints, in shaping the particular writing system used in a language.