16.1 Introduction
The 1990s saw a sharp rise in the empirical study of literacy interventions across Asia, Africa, Latin America, and other clusters of countries that together have been called the Global South. Arguably, the impetus was the World Conference on Education for All held in Jomtien, Thailand, in March 1990 (Inter-Agency Commission, 1990). A decade on, the effort was formalized further to achieve the UN Millennium Development Goal (MDG 2) of universal primary education by 2015. Providing primary education at scale meant that literacy instruction at scale took center stage. Alongside this development, the measurement of children’s learning was maturing as a field, with initiatives such as the multiagency Learning Metrics Task Force (LMTF, 2012–16) becoming exceptionally influential in determining what would be measured, and therefore valued, as an indicator of success for an educational intervention. The task force identified literacy and communication as a learning domain, and the ability to read as one of seven areas of measurement for global tracking. The next goalpost, the UN Sustainable Development Goal (SDG 4), is quality education for all by 2030. It has brought to the fore the need for sensitivity as to what constitutes “quality,” and for whom. It is against this background that we review experiments with literacy interventions in the Global South. Rigorous reviews of the experiments under consideration have been reported earlier in the context of literacy and foundation learning in developing countries (Nag et al., 2014; Nag, Snowling et al., 2016). Here, we analyze their sensitivity to contextual and cultural factors. We start with a description of the perspectives that led to the focus areas in our qualitative analysis. Then, we present our methodology of using cultural probes to examine interventions and their evaluation. This is followed by a narrative synthesis of the findings and a discussion of the implications for evaluating the next generation of literacy interventions.
16.2 Why Contextual and Cultural Sensitivity?
The theoretical roots of a contextual analysis of literacy interventions come from sociocultural perspectives on literacy learning, including the “literacy as social practices” account. Within these perspectives, language is considered as a fluid and messy construct, power relations exist in the social contexts where children’s language and literacy learning occur, and cultural processes intertwine with teaching-learning processes. Literacy and language education, particularly in bi- and multilingual contexts, are therefore necessarily also influenced by sociolinguistic and sociocultural processes. Such a situated, contextual, and cultural perspective has traditionally recorded the language and literacy practices found in children’s schools, classrooms, and homes (e.g., Heath, 1982). The perspective, however, can also be used to problematize and examine the specific language or literacy activities that are chosen and prioritized in an intervention and what is focused on when assessing their effectiveness.
A spotlight on particulars may appear to be at cross-purposes with accounts that draw solely upon universal within-child factors to explain literacy learning. Examples of universal factors include children’s phonological awareness, vocabulary knowledge, and grammar knowledge. These are seen as the cognitive-linguistic foundations that support decoding and reading comprehension across linguistic contexts. Other within-child factors include speed of processing, executive functions, working memory, visual processing, and inference making. The language, the writing system, and whether the learner is a novice or an expert may define the strength of associations between these factors and component skills of literacy (e.g., Eritrean and Ethiopic scripts: Asfaha, Kurvers, & Kroon, 2009; South and Southeast Asian scripts: Nag, 2017). Since one explanatory model is not enough to capture language, script, and learner variety, the most tenable account is a moderate rather than a strong universalist view of literacy learning. A further point in favor of the universal account is its explanatory power in terms of children’s literacy attainments as well as of why some activities support learning better than others. These explanations of why there are individual differences in attainments and selective intervention effects are, however, not exhaustive; not all aspects of literacy outcomes can be explained by within-child factors (e.g., the role of socioeconomic status: Hungi & Thuku, 2010; Lervåg et al., 2019; “opportunities to learn” in school and home: Rolleston & Krutikova, 2014; home practices: Dulay, Cheung, & McBride, 2019; Mount-Cors, 2011; Puranik et al., 2018). Thus, an account that includes contextual factors improves our understanding of outcomes.
16.2.1 Experiments and Confidence Expressed by Researchers
Methodologically rigorous intervention trials have been considered essential to assessing the effects of literacy interventions, and of social policy and reform more broadly (Banerjee & Duflo, 2011; Snowling & Hulme, 2011). These are experimental studies that measure the effect of a well-controlled and carefully manipulated independent variable (the target intervention) on clearly isolated and specified dependent variable(s) (e.g., children’s decoding efficiency). Confidence in inferences of causality (that the intervention led to the recorded change in the variable) is greatest when the study is a randomized controlled trial (RCT), and the use of this design to assess literacy interventions in the Global South has grown.
Randomized controlled trials have been harnessed to understand what the specific causal pathways to change might be. However, this experimental design has been criticized for its reductionist approach and poor attention to the contexts of intervention. Despite their rigor, RCTs have failed to provide the level of information within education that the method has provided within, for example, health, nutrition, sanitation, finance, and agriculture. One concern is that evaluations are conducted over the short term, while longer timescales are needed to better understand the enduring effects of an educational intervention. Incorporating a longer timescale within the RCT is an issue of study design, and there is good reason to believe that longitudinal datasets, despite their rarity within education, can be achieved within the Global South (e.g., Turkey: Kağitçibaşi et al., 2009; India, Vietnam, Peru, and Ethiopia: the Young Lives datasets). A second concern is that measurement has progressed regarding within-child variables but not regarding contextual variables such as the quality of teaching and how interventions are reinterpreted by key stakeholders and implementers. A third concern is for experimental trials to more robustly incorporate insights from qualitative methodologies. Examples of qualitative methods include interviews, open-ended field notes, and ethnographic studies. Combining the RCT with qualitative data sources can also help push the focus beyond within-child cognitive-linguistic variables to contextual data such as, for example, sociolinguistic, sociocultural, socioeconomic, institutional, and implementation- and opportunity-related variables. A succinct summary of why the contextual and mixed-methods approach matters is found in the following:
[D]espite a current emphasis in research and policy on attempting to determine what works, contexts matter. The fact that contextual influences are multiple and complex makes the work of reading researchers more difficult, and pushes the field to remember that the focus needs to be on what works, for whom, when, why, and how.
This type of research design assumes an added urgency when considering intervention studies in the Global South. First, many of the interventions we reviewed in middle- and low-income countries were imported programs that were first developed in high-income countries, prompting questions about contextual appropriateness. Second, researchers of reviewed studies expressed confidence in an adapted intervention, and we were interested in examining whether this reported confidence in the intervention had been assessed in the context of the local culture, language, and educational practices.
16.2.2 Sociocultural Responses to Including Context in Literacy Research
Central to the discussion of context is culture, which may be said to refer to “the daily patterns of living (cultural practices) that allow individuals to relate to the surrounding social order” (Rueda, 2011, p. 85). Put simply, culture encompasses everyday activities and provides individuals with historically situated patterns for action. Culture is dynamic, and it can be assumed that there are many cultures, rather than one grand culture, within a social group. This attention to daily practices can be considered a response from the sociocultural perspectives on literacy research to calls to avoid the potential for a “reductionist” approach when mounting an RCT. Studies of cultural factors have focused on how these factors influence school-based acquisition of literacy and learning achievement. These studies most often engage with disadvantaged communities of learners to correct the view that the cultures of disadvantaged communities show deficits, and to posit that their cultural heritage is a positive resource to harness for learning (e.g., Asfaha & Kroon, 2011; de la Piedra, 2010; Mount-Cors, 2011).
A main concern in this perspective is how students from outside the school or mainstream culture face obstacles in achievement when compared with those from within that culture. It is hypothesized that these cultural differences might have an effect on intrapersonal cognitive, motivational, and affective conditions and on interpersonal contexts and settings, “serving to facilitate or constrain participation and interaction” (Rueda, 2011, p. 92). Within this perspective, teachers are therefore urged to practice cultural responsiveness, where they accept and use students’ home language and literacy practices and interact with students in ways consistent with the latter’s home values (Mount-Cors, 2011; Zhao et al., 2012), and realize that “storytelling and question answering” and the role played by peer learning may differ across cultures (Nag, Snowling, & Asfaha, 2016; Rueda, 2011). The use of culturally relevant content and the deployment of culturally familiar materials are argued to have a positive impact on children’s motivation and achievement (Li, 2011; Watanabe, 2015).
Although the bulk of the studies in the sociocultural perspective on literacy have been qualitative in methodological approach, quantitative and mixed-methods studies that examine the linkages between cultural factors and literacy learning outcomes are becoming available. For example, Li (2011) described studies that attempted to measure cultural discontinuities between home and school contexts using a factorial analysis, in order to then use the cultural discontinuity scores to study associations with school achievement.
16.2.3 Literacy as Social Practice
The focus on context in literacy learning and research necessarily leads one to look at how literacy is viewed as a social practice in what has come to be called New Literacy Studies (Street, 1984, 1995). Street argued that the social context for literacy learning varied and “that there are different literacy practices that carry with them different values and affordances” (Street, 2016, p. 336). Within this perspective, the inherently context-free cognitive-linguistic view of literacy learning is contrasted with an “ideological” view, where the focus is on power relations such as those present between the learner and the teacher, the learner and the interventionist, and the learner and the policymaker, to list a few. These power relations are linked to purpose and meaning and what may be considered normative; they include beliefs about specific practices, and how these interact with social stratifiers such as the learner’s (and the other’s) gender, race, age, and health status. One outcome of the analysis of power in literacy practices is the critical view of how the literacy taught in formal school settings is privileged as the “universal standard” (Street, 2016, p. 337) while out-of-school literacy is no longer valued. While we acknowledge that more needs to be done in the Global South to understand the learning and knowledge gathered by children outside institutionalized school curricula, this is not the focus of this chapter and hence will not be taken up for further discussion (see Nag, Chapter 15, and Friedlander & Goldenberg, Chapter 18, in this volume).
16.2.4 Sociolinguistic and Ethnographic Perspectives on Diversity
Concepts such as diversity and multiculturalism are limited in their power to explain the complexity of different settings (e.g., the multilingual urban centers in the UK: Blackledge & Creese, 2017). Since similar complexities have long existed in the margins as well as in mainstream settings, there is a realization that discussions of diversity in reified terms such as language, ethnicity, and race fail to capture the “diversification of diversity,” or extreme diversity, across the Global South. In addition, our views of linguistic and literacy learning contexts have been informed by conceptualizations of language, multilingualism, and the social uses of language within sociolinguistic and ethnographic research, and by the intensification of diversity arising from the mobility that accompanies within-country economic shifts and broader globalization, as captured in diaspora research.
Sociolinguistic-ethnographic perspectives view language, in addition to its structural aspects, as something essentially social, and as a fluid and messy construct that lacks a clear boundary when viewed in relation to other languages. Multilingual encounters are constructed in such a way as to unveil the connotations of power or the hierarchical relations apparent in the social world. For example, the choice of one or another national language as the school language or as the target of an intervention is seldom neutral and carries different implications for different language groups. What is needed, then, is a broader, open yet critical gaze at how people from different backgrounds come to interact in everyday encounters with each other and with reading and writing (after Blackledge & Creese, 2017).
One indication of the incorporation of context and culture is when literacy interventions acknowledge and are responsive to local factors. For example, all reading programs, irrespective of whether they focus on phonics, whole language, or a mix of the two approaches, are necessarily delivered within the “real-world” milieu of historical, cultural, and linguistic processes (e.g., postcolonial settings; a strong oral tradition; home languages unrecognized in school). We therefore conducted a cultural “audit” of intervention studies published between 1991 and 2016 in low- and middle-income countries. For global literacy, 1990 is the landmark year when the World Conference on Education for All in Jomtien, Thailand (Inter-Agency Commission, 1990), triggered an international movement to provide literacy instruction (and more) to all children. We first identified contextual factors that were described as important within the sociocultural literature, drawing especially on ethnographies. Intervention studies were then reviewed for how these cultural and contextual elements were accounted for in the intervention design, assessment tools, analytic plans, and the interpretation of results. Since all interventions in the review were methodologically high-quality experimental studies, identifying what these studies considered important or relevant to report was seen as indicative of the theoretical and implementational importance given to local factors. The nature of coverage, and thus “cultural sensitivity,” in these experiments has the potential to inform the future design of interventions and also the theoretical underpinnings of literacy research in the Global South.
16.3 Qualitative Analysis of Cultural Probes in Intervention Design and Assessment
16.3.1 Research Questions
Our qualitative review aimed to answer the following questions:
* Are contextual/cultural factors represented in evaluation studies of literacy interventions?
* What are the main representations of these factors?
* Are there gaps in representation of these factors?
Randomized controlled experiments conducted in countries identified as low- and middle-income (World Bank, OECD) provided the database for the review. To examine our questions about representations of contextual and cultural factors, we created a template to guide our cultural probes. We looked for the treatment of contextual factors in three sections of the intervention evaluation reports: (a) the design of the intervention; (b) the assessments used; and (c) the analysis processes. We also noted information not covered by these categories under issues related to “language” and “other.” Assessors (usually the two authors, or the first author and a social and cultural psychologist) independently looked for the ways that studies dealt with language diversity, local practices of education, and other cultural elements at different stages of the intervention study. Since the language in which the literacy intervention is carried out, whether home or school, minority or dominant, foreign or national, has important implications for learning, we examined how this was recognized in the design of the interventions. Similarly, local educational practices such as choral singing and cultural elements in local games or folk tales may constitute important factors in a particular context (after Nag, Snowling et al., 2016), and we probed whether these were represented in literacy interventions and studies of their effectiveness, and if so, how. We also looked to see how social stratifiers, including gender, income, rural–urban settings, and nomadic-settled lifestyle, were acknowledged. Our cultural probes therefore aimed to capture contextual factors in the design of interventions, the assessments used in evaluating the interventions, and the analysis of results of the assessment.
The thematic analysis of the cultural probe data from across the interventions broadly linked to two issues: the language context (e.g., the use of folk tales) and culture and context not related to language but still relevant (e.g., asking men to run assessments in a conservative school for girls). These qualitative data form the basis of this chapter.
The RCTs for our analysis were published over the twenty-six years from 1991 to 2016. All reported an intervention conducted in a low- or middle-income country and had a thematic focus on literacy and foundation learning. These were experiments that had already been reviewed for methodological quality based on the principles of appropriateness, rigor, validity, reliability, openness, transparency and cogency, and clarity of conceptual framing (Nag et al., 2014). Only studies that demonstrated adherence to these principles and were rated as high or moderate-high are included in the current review.
16.3.2 Focus on Randomized Controlled Experiments
The review is based on twenty RCTs reporting seventeen interventions in nine countries. The duration of interventions ranged from four weeks to two years, and the focus of intervention was literacy and numeracy skills (eleven interventions), language skills (four), and school readiness (two). The facilitators were teachers (fourteen), trained volunteers (three), trained facilitators (one), and others such as specialist trainers, school management staff, government officials, and personnel from an agency external to the school system (six). For details see Table 16.1.
Table 16.1 Included randomized controlled experiments by intervention focus, duration, student, and facilitator details
| Author, date, Country | Intervention focus | Duration | Level/age targeted | Facilitator | Sample |
|---|---|---|---|---|---|
| Abrami et al. (2016), Kenya | Interactive, multimedia literacy instruction | 13 weeks (90-minute weekly lessons) | Grade 2 (7–10 years) | Trained teachers with support from trainers | 354 students |
| Banerjee et al. (2007) (first randomized experiment), India | Remedial instruction on basic literacy and numeracy | 2 hours per day for 1 year | Grades 3 & 4 (7–9 years) | Trained volunteers | 98 schools |
| Banerjee et al. (2016), India | Math and literacy instruction; teacher training; materials support | 40 days of “learning camps” plus 10-day summer camps | Grades 1–5 (6–12 years)** | Trained teachers, volunteers, government officials | 1,156 schools; 35,044 students |
| Borzekowski & Henry (2010), Indonesia | Educational multimedia targeting language and life skills (Jalan Sesama videos) | 14 weeks (1 episode per week) | Preschool (3–6 years) | School teachers | 160 children; 160 parents |
| Brooker & Halliday (2015) & Jukes et al. (2016), Kenya | Enhanced literacy instruction | 2 years | Grade 1 (90 percent: 7–10 years)** | Teachers and health workers | 101 schools; 5,233 students |
| Davidson & Hobbs (2013) and Piper & Korda (2011), Liberia | “teaching literacy using whole-class instruction, prescriptive lesson plans and close monitoring and supervision” (p. 291) | 18 months | Grades 2 & 3 (average 12–13 years)**; second intervention: Grades 1–6** | Trained teachers; coaches | 176 schools; 2,988 students; second intervention: 30 schools |
| Dowd et al. (2016), Ethiopia | Enhanced ELM instruction comprising 50 early literacy & 50 math games | 5 months | Preschool (5–6 years) | Trained facilitators | 36 early childhood centers; 451 students |
| Gomez Franco (2014), Chile | Read-aloud program | 1 year | Preschool (3–5 years) | Trained teachers | 92 pre-kindergarten classrooms |
| He, Linden, & MacLeod (2008) (first randomized experiment), India | Computer-assisted learning (CAL) English program with activities, games, and materials support | 1 year | Grades 2 & 3 (6–9 years)* | Trained teachers; outside agency | Experimental grp.: 97 classes, 2,699 students; control grp.: 97 classes, 2,618 students |
| He, Linden, & MacLeod (2009) (first randomized experiment), India | Intervention with stories, flashcards, and charts | 6 weeks | Grade 1 (4–5 years) | Trained instructors | 67 schools; 2,089 students |
| Kerwin & Thornton (2015), Uganda | Mother-tongue instruction, teacher support, & teacher training | 1 year | Grades 1–3 (7–9 years) | Trained teachers | 38 schools; 1,900 students |
| Lakshminarayana et al. (2013), India | Afterschool remedial math and language support; additional material support for girls | 18 months | Grades 2–4 (4–12 years) | Trained community volunteers | Experimental grp.: 107 villages (54 received an additional intervention component); control grp.: 107 villages; total: 4,461 students |
| Lucas et al. (2014), Oketch et al. (2014), Kenya, Uganda | Teacher training on a “scaffolding approach to literacy instruction,” mentoring of teachers, and material support | Around 18 months | Grades 1–3 (average 6–9 years)* | Trained teachers, head teachers, and school management | Kenya: experimental grp. 3,574 students, control grp. 3,441 students; Uganda: experimental grp. 3,441, control grp. 3,576 |
| Opel, Ameer, & Aboud (2009), Bangladesh | Whole-class dialogic reading | 4 weeks | Preschool (5–6 years) | Trained preschool teachers | 80 students |
| Ozler et al. (2016), Malawi | Teacher training; teacher incentives; parent education | About 6 weeks of teacher training | Preschool (3–5 years) | Trained teachers | 189 community-based childcare centers; 5,011 children |
| Piper et al. (2014), Kenya | Curriculum, teaching materials, and teaching practices aligned with current research; teacher training; materials support | 1 year | Grades 1 & 2 (6–8 years) | Trained teachers | 73 schools; 2,082 students |
| Piper et al. (2016), Kenya | Mother-tongue literacy instruction | 1 year (150 days of instruction) | Grades 1 & 2 (7–9 years) | Trained teachers | 414 schools; 1,850 students |

* Inferred age band.
** Several older children; for example, “most primary grade classrooms are filled with overage students” (Davidson & Hobbs, 2013, p. 285).
16.4 Global Findings on Reform Programs in Literacy Education
The seventeen interventions under review, reported across twenty RCTs, attempted to introduce reform programs in literacy education in nine countries in the Global South. We examined how contextual realities were reflected in the design of these interventions, the assessments used in their evaluation, the analysis of results from these assessments, and any other specific intervention-relevant aspect of a particular study.
16.4.1 Nature and Origins of Interventions
The reviewed interventions showed modest localization embedded within extensive borrowing. The main target of these interventions was increasing the literacy skills of young learners, mostly between the ages of three and nine. Intervention components included the provision of teaching materials, the introduction of best practices in teaching literacy, and/or teacher preparation. A typical description of such an intervention can be found in a study in Kenya, where the literacy intervention was designed to align local teaching practices with “successful models of literacy acquisition in an alphabetic language” in order to tackle “perceived barriers to successful instruction” (Brooker & Halliday, 2015, p. 3). In addition, at least in this same project from Kenya, the intervention “sought to build on effective instructional practices that were already in use locally” (Jukes et al., 2016, p. 451).
However, for most interventions, the ideas introduced in the reforms came from outside the intervention country, mainly from a high-income country in the West. Usually, the central ideas were either borrowed from high-income countries and adapted locally by research teams, or were globally promoted by international agencies and adopted by local NGOs. For example, the multimedia program ABRACADABRA (A Balanced Reading Approach for Children Always Designed to Achieve Best Results for All), also known as ABRA, was based on “systematic reviews of evidence about what works in reading and spelling” by the National Reading Panel, which was tasked with addressing local needs in the United States (Abrami et al., 2016, p. 947). An intervention in Ethiopia was based on an Emergent Literacy and Math (ELM) program developed by the international NGO Save the Children, and provided resources for reading, play, and cooperative games to support literacy and math skills and physical and socio-emotional development (Dowd et al., 2016). Sometimes, explicit geographic reference was made to the origin of the best practices espoused by the intervention. A study in Chile with three-to-five-year-olds, for example, relied in its rationale for intervention development on practices recommended in the United States (Gomez Franco, 2014).
In other cases, we found acknowledgment of the need to bring local practices in line with internationally proven practices “by aligning curriculum and teacher practices with current research” (Kenya: Piper et al., 2014, p. 11). Similar references to the global literature are found in several other studies (e.g., Jukes et al., 2016; Opel et al., 2009). Against this background, adaptations of imported interventions aimed to achieve cultural appropriateness by giving due consideration to local cultures and languages. The Primary Math and Reading intervention in Kenya, PRIMR, for example, included locally relevant stories in the literacy intervention (Piper, Zuilkowski, & Mugenda, 2014), and local languages were used in consent forms and interview guides elsewhere (e.g., Afaan Oromo in Ethiopia: Dowd et al., 2016). A match of intervention language with home language was found in a Bangla dialogic story-reading intervention by Opel et al. (2009); all the children and teachers in the study spoke Bangla at home, and the Bangla books used for the intervention were locally produced. Taken together, while the effort to adapt foreign ideas to local contexts was present in more than half of the interventions we reviewed, the representation of contextual factors was limited by inadequate definitions of what constitutes the local.
16.4.2 Assessments Used to Evaluate Interventions
Measurement of change is a core element of intervention studies, and we found a preference for globally available assessment frameworks. Prominent among these were the Early Grade Reading Assessment (the United States), the Early Grade Mathematics Assessment (the United States), the Schedule of Early Number Assessment (Australia), and the Reading Recovery Observation Survey (New Zealand). In addition, the use of the Picture Vocabulary subscale of the Woodcock–Muñoz Language Survey Revised (Chile: Gomez Franco, 2014) and the related Peabody Picture Vocabulary Test (Malawi: Ozler et al., 2016) confirms the popularity of Peabody-inspired vocabulary tests in interventions in the Global South (Nag, 2016).
Locally produced assessments were available but uncommon, and varied in depth of localization. An example of a more extensive attempt at creating “culturally appropriate” tests is the Malawi Development Assessment Tool (MDAT) for rural Malawi (Ozler et al., 2016). The language, fine motor/perception, gross motor, and personal-social subscales of the MDAT used locally available materials. For example, the language subscale contains items asking the child to explain the use of objects by showing locally available objects “such as a small, homemade broom (used for sweeping), and a matchbox (containing matches, used for lighting stove),” and replaced “apple” with “papaya,” “a fruit that is well known throughout the country, and was estimated to be of similar difficulty as the word ‘apple’ would be in the United States” (Ozler et al., 2016, pp. 56–57). Dowd et al. (2016) reported similar adaptations, using only objects familiar to the child such as rocks, beads, beans, or bottle caps. Translations were, however, by far the most common adaptation. A little over half of the interventions we reviewed (nine interventions) adapted borrowed tests into local intervention languages, with under one third (29 percent, or five) developing their own assessments, whether in a local language (three interventions) or English (two). Lucas et al. (2014), for example, developed test items in English first and then translated these into the languages of instruction in the project sites (Kenya: Swahili; Uganda: Lango).
Two other points are noteworthy: assumptions about when language adaptation is not needed, and decisions about when local languages need not be the target language. First, close to one fourth of interventions (23.5 percent, or four) retained the language of borrowed assessments because the school languages in the chosen countries were similar: English (three interventions) and Spanish (one). Here the assumption that the assessment tool needed no language adaptation appeared to be made despite the body of evidence showing how language varieties change across geographies (e.g., World Englishes: Rose & Galloway, 2019). The exception was the acknowledgment that borrowing a North American reading achievement measure would "lack cultural sensitivity to adequately capture the development of reading skills in Kenyan students" (Abrami et al., 2016, p. 962). Second, Davidson and Hobbs (2013) noted that public schools in Liberia use English as the language of instruction, and the researchers felt they had to go along with this policy. In this and seven other studies in our review in which English was the language of intervention and assessment, it is not always clear what role the local languages played in administering the assessment. In at least one study, the local language, Marathi, was used to explain question sets even when these were first presented in English (He et al., 2008). In summary, our review suggests variable engagement with the idea of localizing assessment tools, coupled with limited transparency in the reporting of localization protocols.
16.4.3 Treatment of Languages and Language Diversity
A further dimension we examined relates to local and nonstandard language varieties at the intervention sites. Two interventions aimed directly at developing local-language literacy (Kenya: Piper, Zuilkowski, & Ong'ele, 2016; Uganda: Kerwin & Thornton, 2015). Sometimes the focus on local languages included awareness-raising work, as Kerwin and Thornton (2015) noted in their study in Uganda, by "engaging with parents and the local community to communicate the benefits of mother tongue instruction" (p. 4). However, most interventions (47 percent, or eight interventions) dealt with colonial languages (English, Spanish). In the rest, literacy instruction was in a local language: Telugu, Bangla, Hindi (in two interventions), Swahili (four), Chichewa, Leblango, Afaan Oromo, Bahasa Indonesia, Javanese, Sundanese, Lango, Marathi (two each), and Urdu, Kikamba, and Lubukusu (one each). Many of the interventions dealt with more than one of these languages (e.g., parallel programs in Lango and Swahili in Lucas et al., 2014: Uganda and Kenya), or were two-language interventions covering a local language and English (e.g., Swahili and English in Brooker & Halliday, 2015: Kenya). However, the choice of intervention language is biased toward national or state languages, or languages with regional dominance even when they lack official status. This skew arguably reflects local school provision, the neglect of mother-tongue education, or the absence of a more differentiated language education policy (see also Verhoeven & Severing, Chapter 4; Nakamura & Holla, Chapter 8; and Morgan et al., Chapter 10 in this volume). In many cases (35 percent, or six interventions), it was also unclear whether the intervention language was the language of most of the students involved in the experimental intervention.
For example, in a multimedia intervention using television programming, Borzekowski and Henry (2010) used local languages for the intervention but did not make clear which of the participants' home languages (Sundanese, Bahasa, Javanese) were used. The same is unclear with respect to the language of measurement and how the analyses factored in home languages when interpreting the results. Thus, the reviewed studies, even when methodologically rated as rigorous, fall short in reporting the language profiles of the children participating in literacy interventions. This is an area that clearly needs researchers' attention.
16.4.4 Treatment of Local Literacy Resources and Practices
Unlike the mixed views on local languages, many studies portray local literacy practices in a mostly negative light. Echoing debates in the global literature cited in the first section, some experiments that we reviewed acknowledged study limitations (e.g., with comprehension measures in Gomez Franco, 2014), short-lived gains (e.g., the limited impact of dialogic reading on vocabulary in Opel et al., 2009), and reservations about scalability. Traditional literacy and educational practices in the contexts of these studies were sometimes cited as the cause of these problems. For example, business-as-usual educational practices were described as the reason for a perceived lack of scalability in a study in India: "The key challenge to mainstreaming the program in government schools was the tendency to revert back to the traditional curriculum and school organization" (Banerjee et al., 2016, p. 27). Another example, from Kenya, cited the "emphasis in the Kenyan curriculum on this aspect of reading" as the reason for the lack of intervention effects on "vocabulary-related skills such as decoding and sight-reading" (Abrami et al., 2016, p. 961).
Some studies tried to take advantage of local resources. In an intervention from India, Banerjee et al. (2007) reported on a remedial program in which volunteer young women, the Balsakhi, worked to improve the basic skills of children who were underachieving in Grades 3 or 4. The intervention drew its human resources from the same communities, thus arguably ensuring that students shared a common background with the facilitators, even if not with the designated schoolteachers. In another study, researchers used community mobilization to select community volunteers (India: Lakshminarayana et al., 2013). These practices may appear culturally responsive, although the influx of an untrained volunteer teaching workforce might destabilize already fragile education systems in these contexts (Nag et al., 2014).
16.4.5 Gaps in the Representation of Contextual Factors in Interventions
One focus of our review was the treatment of languages in the intervention areas. The main reasons for this were the role of language in literacy learning and the relevance of sociolinguistic dynamics of power and identity in the multilingual contexts of interventions. Although about half of the interventions dealt with English and the rest with local languages, many of them (35 percent of all the interventions) failed to mention whether the languages of their interventions were indeed the home languages of the entire student body in their study areas. For example, an intervention from India stated that the programs evaluated were "implemented in northern states of India, in which Hindi is the primary written and spoken language" (Banerjee et al., 2016, p. 8). There is no acknowledgment that Hindi was not the home language of all the students in the study, an inference easily made given the linguistic diversity of that region.
The Northern Uganda Literacy Project (NULP) employed mother-tongue literacy instruction, teacher support, and a training model to improve instruction and learning in both Leblango, the local language, and English. In a section reporting the mixed results of the intervention, the authors felt compelled to point out that learning Leblango has no detrimental effect on learning English, stating that "there is no evidence that the NULP harms students' progress in learning English" (Kerwin & Thornton, 2015, p. 19). In another mother-tongue intervention, less than 20 percent of the training time was reserved for "strategies for teaching in the mother tongue" (Piper et al., 2016, p. 787). Many of the interventions (47 percent) were in the global language of English. However, a corresponding sociolinguistic discussion of its use in middle- and low-income countries (e.g., as a source of inequality) was usually absent. Instead, the studies occasionally mentioned the perception among educators that the use of local languages might be harmful to the development of the school language, English (Kerwin & Thornton, 2015). Lucas et al. (2014) observed similar attitudes among Kenyan teachers, who often used English in their classrooms despite the official Swahili-medium early primary education policy.
Another issue relates to the potential of cross-language analysis to highlight both the universal aspects of reading and the language- and orthography-specific peculiarities that aid or hamper the implementation of a literacy intervention. In their study, Brooker and Halliday (2015) stated that "developing oral language skills is prioritised over teaching the relationships between sounds and symbols" in the instructional tradition at the intervention site, Kenya, and a training program was designed on this basis, one that "would encourage the explicit and systematic teaching of letter-sound relationships" (p. 9). This rested on the premise that "[l]earning to read any alphabetic system depends on understanding the relationship between sounds and the letters that represent them" (Brooker & Halliday, 2015, p. 8). This centering of the alphabetic principle is presented with little acknowledgment of cross-linguistic processes or systematic comparison with the co-occurring local languages, such as the syllable-timed Swahili. Similarly, Piper, Zuilkowski, and Mugenda (2014, p. 14) noted that "existing books on the Kenyan market placed very little emphasis on letters, phonological awareness, or decoding in either language [English or Kiswahili]." Again, little is said about how concepts such as phonological awareness might apply to literacy in the local languages as well as in English, or about the fact that many teachers struggle with these concepts in mixed and other language contexts. Teachers in these contexts find it difficult to naturalize the largely English-based alphabetic principle and its teaching methods, perhaps because they often have first languages (or know multiple languages) with simple syllable structures, and because their pre-service training lacked the Anglocentric focus on phoneme-based instruction.
These experiential, sociolinguistic, and psycholinguistic debates about literacy development were underrepresented in the studies we reviewed.
16.5 Conclusions and Discussion
Substantial distance still needs to be covered to avoid reductionist knowledge transfer about what works for literacy interventions in the Global South. There is clearly a need to re-explore and address the potential conflicts and differences between new interventions and existing practices. We have seen that RCTs rated highly for methodological rigor fall short in their consideration of contextual factors in the design, assessment, and analysis of intervention results. Insight about the need for cultural sensitivity was often reported after the intervention rather than before it. Drawing on sociocultural and critical sociolinguistic perspectives has enabled us to point not only to these representations but also to the gaps in the representation of linguistic diversity in the contexts of these interventions (see also Nag, Chapter 15 in this volume). These gaps include a failure to account for the relevance of universal features alongside specific features of reading in the local languages. In addition to language diversity, contextual factors may also include local literacy and educational practices and resources (such as folktales, playful activities, games, and riddles) and many other customs with the potential to affect the viability of an intervention. Taken together, our review indicates the need to draw on multiple disciplines in the effort to map important contextual factors.
The reform agenda inherent in intervention studies is clearly paralleled by the social change agendas espoused by all the approaches we consulted in developing our framework for analyzing sensitivity to cultural and contextual factors in intervention studies. However, these change agendas risk confronting the tendency of culture to maintain the status quo and to retain community practices in the face of the reforms that interventions seek. Equally important is the role of individual agency in maintaining traditions or introducing changes. These apparent contradictions may need to be reconciled in further research. Inquiries into cultural sensitivity should be valued for their ability to position these competing tendencies in ways that resolve the potential conflicts arising from intervening in specific cultural contexts. Doing so creates room to improve the impact of interventions on learning, a goal shared by all the studies reviewed here and enshrined in the UN's SDGs. Combining quantitative and qualitative data in educational experiments, and furthering the debate on how far to tolerate hybridity of methods, methodologies, and philosophies within reform agendas for global literacy development, are essential to ensuring sustainable impact.
The contexts of intervention in the high-quality studies reviewed here reveal a complex web of cultural elements, which have featured in different sections of the evaluation reports of these interventions. A still more complex picture emerges from theoretical discussions of contextual factors in multilingual literacy contexts. For example, the "diversification of diversity" alluded to by sociolinguists and linguistic ethnographers accommodates linguistic diversity to the point of questioning language boundaries themselves, and adopts analytical perspectives that center power relations among groups in multilingual contexts. Such levels of analysis are largely absent from the interventions reviewed here. This leads to the question of what levels of representation of cultural or linguistic diversity are possible or necessary for interventions to achieve cultural sensitivity. It is against this background that we propose that the outcomes of literacy interventions may be better understood if attention is paid to local, contextual factors alongside universal factors. Thus, rather than assuming that supporting universal within-child factors will provide "certainties" for literacy learning, we argue for intervention evaluations to recognize and record the situated nature of learning.