
Teaching reading through Direct Instruction: A role for educational psychologists?

Published online by Cambridge University Press:  23 June 2020

Kerry Hempenstall
Affiliation:
School of Education, RMIT University, Melbourne, Victoria, Australia
Author for correspondence: Kerry Hempenstall, Email: kerry.hempenstall@rmit.edu.au

Abstract

Educational psychologists can play a number of roles within education settings. They are often called upon to assist with the assessment and treatment of disability issues, student behaviour and mental health problems, parent and teacher liaison, and counselling, to name a few. Less frequently pursued is an active role in establishing and evaluating both general classroom and remedial literacy instruction. A lack of success in the literacy domain can have far-reaching effects on students’ educational and social and emotional development. Further, it has been noted in national and international reports that the accumulated evidence for effective literacy instruction has not had the impact on policy that it deserves. Educational psychologists are well placed to assist schools to develop an evidence-based perspective that can provide a marked improvement in the literacy development of students. One such model with a long research history is Direct Instruction. This article will describe the model, and consider how it might be profitably employed in schools.

Copyright
© Australian Psychological Society Ltd, 2020


Closing the Gap in Literacy

For quite some time it has been clear that our education system has been unable to close the gap between high and low achievers, despite regular increases in funding (Holden & Zhang, 2018), but the resultant community concern has heightened and broadened in recent times. Three findings, in particular, have produced significant concern. The Australian Industry Group (2016) reported that 44% of Australians had reading skills below the minimum required to cope with the demands of workplaces and society. Recent scores on Australia’s National Assessment Program Literacy and Numeracy (NAPLAN) have shown that the achievement gap has actually widened (Goss & Sonnemann, 2016; Holden, 2019; Holden & Zhang, 2018). Further, international assessment through the Programme for International Student Assessment (PISA) found that the reading performance of Australia’s adolescents has declined since 2000 (Thomson, De Bortoli, Underwood, & Schmid, 2019).

There has long been an unfortunate disconnect between empirically derived research findings on literacy and the practices typically occurring in our classrooms (Hempenstall, 2006). Education has a history of regularly adopting new ideas, but it has done so without the widescale assessment and scientific research that is necessary to distinguish effective from ineffective reforms. This absence of a scientific perspective has precluded systematic improvement in the education system, and it has impeded growth in the teaching profession for a long time (Anwaruddin, 2015; Carnine, 1995; Hempenstall, 1996).

As other articles in this special issue have made clear, there is a great deal known about the development of reading skill and the means of effectively teaching its constituent elements. This knowledge seems to exist in a parallel universe, having yet to make its way into the teaching profession in a meaningful way. A number of factors relevant to this situation have been explored. For example, educational policies over many years have lacked an evidence-based culture. Carnine (1991) viewed educational policy makers as lacking any scientific framework and inclined to accept proposals based on good intentions and unsupported opinions. In more recent years, policy makers have begun to exhort the profession to take into account the research findings (e.g., National Inquiry into the Teaching of Literacy, 2005; New South Wales Parliament Legislative Council, 2020); however, many teacher education faculties have yet to modify their programs to reflect this policy shift. Buckingham (2019) reported several disturbing findings regarding initial teacher education. Just 4% of the 116 literacy units in reviewed education faculties included a specific focus on early reading instruction. Further, only 6% of those units included all five of the recognised elements of effective early literacy instruction.

This absence of emphasis on beginning reading has led to new teachers having little knowledge of how optimally to teach reading (Fielding-Barnsley, 2010; Hammond, 2015; Stephenson, 2018). Studies of both preservice and inservice teachers have noted a lack of knowledge about early literacy, and also personal literacy concerns among a higher than expected number of teachers (Cohen, Mather, Schneider, & White, 2017; Meeks & Kemp, 2017). Those in the latter group are doubly impeded in that their low level of understanding of the structure of language and personal literacy difficulties combine to preclude effective instruction in the phonological processes central to evidence-based initial reading instruction (Stark, Snow, Eadie, & Goldfeld, 2016).

So, the evidence indicates that the Australian education system is not doing enough to ensure all students have the opportunity to reach their potential. For example, there is a gap in student achievement between advantaged and disadvantaged schools that increases between Year 3 and Year 9 from 1 year 3 months to 3 years 8 months. Across Victorian schools there is an average attainment gap between the top students and low progress students of 7 years (Goss & Sonnemann, 2016).

Although literacy research has had little penetration into Australian classrooms, there are exceptions. ‘About 5% of schools are routinely doing better than we would expect given their student population mix’ (Goss, 2018). Some of these schools are disadvantaged.

These schools tend to have several discernible characteristics:

1. School discipline. Based on high expectations, a clear set of consistently applied classroom rules, and a centralised school behaviour policy.

2. Direct and explicit instruction. New content is explicitly taught in sequenced and structured lessons. It includes clear lesson objectives, immediate feedback, reviews of content from previous lessons, unambiguous language, frequent checking of student understanding, demonstration of the knowledge or skill to be learnt, and students practising skills with teacher guidance.

3. Experienced and autonomous school leadership. Stable, long-term school leadership, and principal autonomy to select staff and control school budgets.

4. Data-informed practice. Using data from teacher-written, NAPLAN and PAT assessments to improve teaching, track student progress, and facilitate intervention for underachieving students.

5. Teacher collaboration and professional learning. Collaboration among teachers and specialist support staff to cater for the often complex needs of disadvantaged students, with a focus on teacher professional learning, involving peer observations, mentoring, and attending practical professional development activities that help refine literacy and numeracy instruction.

6. Comprehensive early reading instruction. Including five necessary elements of reading instruction: Phonemic Awareness, Phonics, Fluency, Vocabulary, and Comprehension. (Joseph, 2019, p. 2)

These characteristics are uncommon in our schools, and their adoption is unlikely to occur without expert encouragement and support, for reasons outlined earlier. This vacuum provides a potential role for educational and developmental psychologists in the school system. Apart from Characteristic 3, these school qualities represent a rich vein of opportunity for our practitioners to play an important role in school improvement and thereby student attainment. Because of their training, which differs from that provided in education courses generally, psychologists actively seek out research-based practices, are practised in dealing with data, and treat collaboration as a core part of their work style.

The main focus of this article is on Characteristics 2 and 6, which emphasise direct and explicit instruction and early reading, although 4 and 5 also offer a rich and related field for psychologists. There is ample evidence that the explicit approach to reading instruction is under-utilised despite being superior to other forms of instruction for basic skill development, especially for low progress students (Alfieri, Brooks, Aldrich, & Tenenbaum, 2010; Archer & Hughes, 2011; Clark, Kirschner, & Sweller, 2012). Apart from the definition provided in Joseph (2019), one teacher characterised direct and explicit instruction as: ‘Instruction, and I don’t mean facilitation, I mean stand-up-in-front-of-the-class, put-it-on-the-board, do-it-this-way-because-it-works-best, practise-until-you’ve-got-it-right instruction.’

Direct Instruction (DI) programs were among the first to incorporate explicit teaching as their delivery system, but additionally they provide carefully structured curriculum content to maximise the impact on student outcomes. The DI reading programs, such as Reading Mastery, have been acknowledged as exemplars of the National Reading Panel’s five major components of early reading instruction listed by Joseph (Stockard & Engelmann, 2010).

The DI model has a relatively long history in reading education, the first program (DISTAR Reading) having been published in 1969, and new programs have been published up to the present day. Initially, the programs were intended for struggling students, but others are designed for whole classes. The most frequently used programs emphasise reading, spelling, language, writing, and maths. For information about any of these programs, see National Institute for Direct Instruction (2018).

The programs share a common teaching style readily observable to any classroom visitor. The instruction usually takes place in small groups (4–15) with a teacher directing activities with the aid of a script, and students actively involved in responding to a fast-paced lesson during which they receive constant teacher communication, questions, and feedback. Choral (unison) responding during part of a lesson keeps students on track and provides feedback to the teacher if any students are falling behind. Also noticeable is the frequent use of a model-lead-test sequence during instruction. Some may know it as ‘I do, we do, you do’. Programs are designed according to what, not whom, is to be taught. Thus, all children work through the same sequence of tasks directed by a teacher using the same teaching strategies. Individual differences are accommodated through different entry points, reinforcement, amounts of practice, and correction strategies (Gregory, 1983).

Assumptions

DI programs extend beyond the principles of explicit instruction and into the curriculum domain (Becker, 1977). It is assumed that all children can learn and be taught; thus, failure to learn is viewed as failure to teach effectively (Engelmann, 1980). Children whose progress is delayed must be taught to learn faster, because catch-up at a later stage is very arduous. This acceleration is achieved through a focus on features designed to improve the efficiency of instruction. So, DI is not simply a program of what to teach, but also of how to teach each element.

Teaching Methodology

Curriculum is designed with the goal of ‘faultless instruction’ (Engelmann, 1980; Engelmann & Carnine, 1982), that is, sequences or routines for which there is only one logical interpretation. The designer’s brief is to avoid ambiguity in instruction — the focus is on logical-analysis principles. These principles enable the organisation of concepts according to their structure. The approach is intended to answer the question: what types of student-teacher interactions or methods lead to the most student development while using the fewest resources?

Engelmann (1980) highlighted four design principles:

(1) Where possible, teach a general case. That is, those skills which, when mastered, can be applied across a range of problems for which specific solutions have not been taught; for example, teaching letter-sound correspondences and phonological skills, such as blending, to enable the decoding of regular words. The generalisations may be taught inductively by examples only, or deductively by providing a rule and a range of examples to define the rule’s boundaries.

(2) Teach the essentials. The essentials are determined by an analysis of the skills necessary to achieve the desired objective. There is an underlying assertion that it is possible to achieve skilled reading by a task analysis and the teaching of subskills within a cumulative framework.

(3) Keep errors to a minimum. Errors are considered counterproductive and time-wasting. For struggling learners, a high success rate is beneficial in building and maintaining the motivation often lost through a history of failure. A low error rate is achieved by the use of the instructional design principles first described in Theory of Instruction (Engelmann & Carnine, 1982), and by ensuring students have the preskills needed to commence any program (via a placement test). Thus, the programs are very detailed and precisely crafted. To reduce variability in teacher presentation, all lessons are scripted, and all programs are field-tested and revised prior to publication. Engelmann described the preeminent feature of DI as the orchestration of detail in program design and presentation. This involves ‘picky details of how the tasks are formulated, how the example sets are designed, how the details of lessons are organized and sequenced from lesson to the next so that only about 10–15% of each lesson presents brand new material, and how exercises are designed so they are unambiguous about details of the content’ (Engelmann, 2004, para. 4). Scripted programs are employed because they help control instructional delivery, thereby increasing fidelity of implementation (Plavnick, Marchand-Martella, Martella, Thompson, & Wood, 2015).

(4) Provide adequate practice. DI programs include the requirement for mastery learning (usually above 90% mastery). Students continue to focus on a given task until the mastery criterion is reached. The objective of this strategy is the achievement of retention. The practice schedule includes massed practice, shifting to a spaced schedule. The amount of practice on a given skill decreases as the relevant skill becomes incorporated into more complex skills. Advocates of DI argue that this feature of instruction is particularly important for low-achieving students and is too often given scant regard (Engelmann, 1980). While this emphasis on practice may be unfashionable, there is considerable supporting research, and a number of effective schools are increasingly endorsing its importance (Rist, 1992): ‘The strategies that have fallen out of style, such as memorising, reciting and drilling, are what we need to do. They’re simple — but fundamental — things that make complex thinking possible’ (p. 19). More recent research has reinforced that point (Megherbi, Elbro, Oakhill, Segui, & New, 2018).

In line with current research findings, effective early reading programs tend to incorporate these four design principles. There is an emphasis on the essential areas for beginners: letter-sound relationships and phonological blending. Teaching these generalisable components within a cumulative framework enables early success in decoding regular words, even those not previously encountered. Review and practice are addressed systematically. Each of these areas is particularly critical for struggling students and receives careful attention in program design to reduce errors and promote mastery.

Factors in Success

In addition to the above overarching principles, there are crucial factors in program implementation.

Grouping

The teachers’ manuals recommend group sizes between 4 and 15 across the various reading programs. Engelmann, Becker, Carnine, and Gersten (1988) portrayed DI programs as individual programs presented in a group format. For this efficiency element to succeed, the teacher must observe each student’s response to every question.

So, the choral responding must be precise to enable the detection and teacher correction of errors. The extent to which teachers can do this successfully depends upon several factors, such as their hearing acuity, ability and determination to ensure their students achieve truly choral responding, and the group size.

The vigilance provided by teachers in attending to student responses is a major defence against any student’s failure in the program. It is an area in which training and monitoring of any teachers inexperienced in the approach should be a priority. Additionally, smaller group sizes have larger effects on vulnerable students (Suggate, 2016).

Time and Intensity of Instruction

An element contributing to the impressive gains almost certainly involves the duration and intensity of the intervention. Longer interventions allow for greater content coverage and adequate practice. Program intensity involves a combination of lesson length, lesson density, and lesson frequency. For example, lesson length for the Corrective Reading program is about 40–60 minutes. This period allows for reasonable content coverage in each session and for the integration of new knowledge into existing knowledge structures (Rosenshine, 2002). As the programs involve a cumulative subskills approach to reading, the introduction of new skills, the practising of recently acquired skills, and the (vitally important) amalgamation of these with the already established core require careful lesson planning within programs and sufficient time for this amalgamation to occur.

Program density involves the extent to which students are actively engaged in learning during the lesson time. Various concepts, such as time-on-task, academic engaged time, and academic learning time have been employed to address the issue of student engagement. As an instructional issue it relates to the manner in which program design evokes high rates of student engagement.

Another element of lesson density involves the proportion of correct to incorrect responses. Students who struggle with reading require high rates of success if they are to adopt new strategies, transfer new skills across tasks, and persevere with new strategies. Teachers have commented on the high success rates achieved daily through both careful lesson design and student placement at the appropriate program level (Schug, Tarver, & Western, 2001). The author counted 300 responses from a student in a 10-minute word attack segment of a Corrective Reading: Decoding lesson. This represents a very high rate of student engagement; additionally, the success rate was above 90%.

Lesson frequency appears to be important, perhaps because of the need for spaced practice of newly mastered skills. It has been noted that students, particularly those at risk, readily forget what they have learned when lesson frequency is low (Rosenshine, 1986). If this occurs, additional time is spent in relearning rather than in new learning and incorporation activities. Alternatively, teachers may ignore some student errors in the interests of maintaining lesson pace, thereby condemning some to failure. Frustration and disengagement are the possible negative outcomes of underscheduling. The program guidelines usually recommend five lessons per week, although this may not be achieved by all schools. The effect of variable frequency impacts most notably on the students most at risk. They are the students most likely to lose hard-won gains through forgetting (National Reading Panel, 2000; Swanson, 2001). The total contact hours are also relevant — for example, each of the levels of the Corrective Reading program entails about 50 hours of instruction.

Priority on Academic Learning

A related issue concerns the priority that schools assign to assisting readers with difficulty. At one level, this relates to the number of social issues schools are expected to address within a school day. A consequence can be that even if lessons are scheduled at a rate of five per week, this may rarely be achieved. Some schools may schedule four sessions, but average little above three sessions, a rate that seriously jeopardises the program’s likelihood of success.

In-Vivo Coaching

Engelmann et al.’s (1988) experience has been that, without safeguards, less than 30% of the skills practised outside the classroom will be evident subsequently in classrooms. Thus, the provision of in-vivo coaching was found to be especially important for the acquisition of skills. Glang and Gersten (1987) commented on the value for teachers in seeing how their own students responded to the expert instructional techniques presented by the visiting supervisor. Unfortunately, this level of coaching support is rarely available in our educational settings. The issue of coaching is increasingly being raised given the expectation that teachers will be required to adopt evidence-based approaches (Hammond & Moore, 2018).

Fidelity

High-fidelity implementation means taking a program with an internal design and following that design. That includes using the materials in a particular sequence, adhering to the amount of time and practice called for by the program, and following the recommendations for grouping or reteaching students. It means using all the essential components as they are designed, including differentiated instructional time and program assessments (Diamond, 2004).

DI program effectiveness is predicated on implementation fidelity. In particular, departures from the program, such as omitting individual turn-taking or specific tasks, may have a significant effect on the average group progress. These ‘creative’ modifications are likely to interfere most with the progress of the most vulnerable students, for it is they who adapt least easily to ambiguous or incomplete instructional sequences. ‘Our analyses revealed that overall fidelity of implementation accounted for 22% of the variance in the gains in basic reading skills and 18% of the passage comprehension gains of middle school students with reading difficulties’ (Benner, Nelson, Stage, & Ralston, 2011, p. 85).

Progress Monitoring

In a cumulative curriculum, it is essential that all tasks are mastered if students (especially the vulnerable) are to make progress. The in-built continuous progress evaluation is valuable in quickly detecting individual or group difficulty at any point. Incorporated in programs are both daily monitoring of errors, and regularly scheduled mastery tests. It is through these program features that problems of low progress can be addressed, and students spared the fate of participating in an ineffectual educational process. This inclusion of continuous observation and testing has been shown in independent research to enhance retention in addition to its role in monitoring progress (Adesope, Trevisan, & Sundararajan, 2017).

Evaluation of the Direct Instruction Model

Surprisingly little serious attention has been paid to DI by either the education community or the wider educational research community, despite its strong history of supportive empirical evidence (McMullen & Madelaine, 2014). As with explicit instruction in general, DI has been criticised, especially by those of a more constructivist persuasion. For an extended discussion, see Hempenstall (2013).

Follow Through

A major study was federally funded in the United States in the late 1960s, arising because of a concern about the poor educational outcomes for disadvantaged students. Entitled Follow Through (Engelmann et al., 1988; Grossen, 1996), it was aimed at the first three years of school, and was designed to determine which methods of teaching would be most effective for disadvantaged students throughout their primary school career. It was a huge study, involving 75,000 children in 180 communities over the first three years of their school life. It is the largest educational experiment ever undertaken, extending from 1967 to 1995, at a cost of almost a billion dollars. There were nine major competing sponsors covering a broad range of educational philosophies. They included child-directed learning, individualised instruction, language experience, learning styles, self-esteem development, cognitive emphasis, parent-based teaching, DI, and behavioural teaching. The models can be reduced to three distinct themes — those emphasising either basic academic outcomes, cognitive development, or affective development. The targeted basic skills included reading, language, spelling, writing, and maths. The models that emphasised the systematic teaching of basic skills (DI and Behaviour Analysis) performed best. In reading, the DI model, which also has a strong phonic emphasis, had the most impressive results in both academic and affective areas. Apart from the basic skills models, all others produced more negative than positive outcomes on measures in the basic skill domain.

Follow-up studies were performed 3, 6, and 9 years after the DI students had completed Follow Through. They showed strong consistent long-term benefits in reading (Gersten, Keating, & Becker, 1988), effects that were evidenced in higher achievement, fewer grade retentions, and more university acceptances than in comparison groups that had traditional education in the same communities. For more reading on Follow Through, see National Institute for Direct Instruction (2013).

Subsequently, meta-analyses documenting the effectiveness of DI were reported by White (1988), who found an overall effect size of .84, and by Adams and Engelmann (1996), who found an effect size of .87. A report from the American Institutes for Research (1999), An Educators’ Guide to School-wide Reform, found that only three programs, DI among them, had adequate evidence of effectiveness in reading instruction. However, the approach has never been accorded the attention that might have been expected.

Recently, a paper published in the Review of Educational Research, ‘The Effectiveness of Direct Instruction Curricula: A Meta-Analysis of a Half Century of Research’ (Stockard, Wood, Coughlin, & Khoury, 2018), outlined and analysed the long history of research into the effectiveness of the various DI programs.

Quantitative mixed models were used to examine literature published from 1966 through 2016 on the effectiveness of DI. Analyses were based on 328 studies involving 413 study designs and almost 4000 effects. Results are reported for the total set and subareas regarding reading, math, language, spelling, and multiple or other academic subjects; ability measures; affective outcomes; teacher and parent views; and single-subject designs. All of the estimated effects were positive and all were statistically significant except results from metaregressions involving affective outcomes. Characteristics of the publications, methodology, and sample were not systematically related to effect estimates. Effects showed little decline during maintenance, and effects for academic subjects were greater when students had more exposure to the programs. Estimated effects were educationally significant, moderate to large when using the traditional psychological benchmarks, and similar in magnitude to effect sizes that reflect performance gaps between more and less advantaged students (Stockard et al., 2018, p. 1).

These outcomes are impressive given the wide range of study designs, sample sizes, educational domains, and evaluation tools employed across the studies. Although there were variations across programs, the effect size for the total sample was .60, with a 95% confidence interval of .54 to .66.
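For readers unfamiliar with how a pooled effect size and its confidence interval of this kind are derived, the following is a minimal sketch of an inverse-variance (fixed-effect) meta-analysis. The per-study effect sizes and standard errors are invented for illustration; they are not the Stockard et al. (2018) data.

```python
import math

# Hypothetical per-study effect sizes (Cohen's d) and their standard errors.
# These values are illustrative only.
effects = [0.45, 0.72, 0.58, 0.66, 0.51]
std_errs = [0.10, 0.12, 0.08, 0.15, 0.09]

# Inverse-variance weights: more precise studies count for more.
weights = [1 / se**2 for se in std_errs]

# Fixed-effect pooled estimate: weighted mean of the study effects.
mean_d = sum(w * d for w, d in zip(weights, effects)) / sum(weights)

# Standard error of the pooled estimate and its 95% confidence interval.
pooled_se = math.sqrt(1 / sum(weights))
ci_low, ci_high = mean_d - 1.96 * pooled_se, mean_d + 1.96 * pooled_se

print(f"pooled d = {mean_d:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")
```

A real meta-analysis of this scale would typically use a random-effects model to allow for between-study variation, but the logic of weighting by precision and reporting an interval around the pooled estimate is the same.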

John Hattie (2009) reached broadly similar conclusions about the size of effect:

One of the common criticisms is that Direct Instruction works with very low-level or specific skills, and with lower ability and the youngest students. These are not the findings from the meta-analyses. The effects of Direct Instruction are similar for regular (d = 0.99), and special education and lower ability students (d = 0.86), higher for reading (d = 0.89) than for mathematics (d = 0.50), similar for the more low-level word attack (d = 0.64) and also for high-level comprehension (d = 0.54), and similar for elementary and high school students. The messages of these meta-analyses on Direct Instruction underline the power of stating the learning intentions and success criteria, and then engaging students in moving towards these. The teacher needs to invite the students to learn, provide much deliberative practice and modeling, and provide appropriate feedback and multiple opportunities to learn. Students need opportunities for independent practice, and then there need to be opportunities to learn the skill or knowledge implicit in the learning intention in contexts other than those directly taught. (Hattie, 2009, pp. 206–207)
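One intuitive way to read Cohen's d values of this kind is as a percentile shift (Cohen's U3): assuming normal distributions with equal variances, an average student receiving an intervention with effect size d would finish at roughly the Φ(d) percentile of the untreated group. A short sketch, using the d values quoted above purely as worked examples:

```python
from statistics import NormalDist

# Convert an effect size d to the percentile of the control distribution
# at which the average treated student would sit (Cohen's U3).
for label, d in [("regular students", 0.99),
                 ("special education / lower ability", 0.86),
                 ("reading", 0.89),
                 ("mathematics", 0.50)]:
    pct = NormalDist().cdf(d) * 100
    print(f"d = {d:.2f} ({label}): average student moves to ~{pct:.0f}th percentile")
```

So a d of 0.99 corresponds to the average student moving from the 50th to roughly the 84th percentile, which conveys the practical magnitude of the effects more directly than the raw d values.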

Apart from evaluations of the DI programs’ effects on student outcomes, many evaluations have also been performed of the diverse instructional features underpinning DI programs, including preteaching to low progress students, signalling, group size, use of overt steps, sequencing of positive and negative examples, pacing, correction procedures, and massed and spaced practice. The bibliography of writings on DI reports 44 such papers (National Institute for Direct Instruction, 2017). The database on Theories on Instruction & Learning comprises 48 papers.

Why Might Educational and Developmental Psychologists Consider Becoming Involved With These Programs in School Settings?

As a group, psychologists are interested in student welfare, and reading attainment plays a huge role in enabling students to make the most of their years of schooling. Additionally, failure of student reading development has been associated with a range of mental health and behaviour issues. Studies have noted an increased risk for both internalising and externalising problems (Boyes, Leitao, Claessen, Badcock, & Nayton, 2016; Katzir, Young-Suk, & Dotan, 2018; Russell, Ryder, Norwich, & Ford, 2015). My own role as an educational psychologist in schools was initially dominated by referrals of student behavioural issues. After dealing with an endless stream of such referrals in both primary and secondary schools, it became apparent that a very high proportion of these students also had significant reading problems. Obviously, not all behaviour problems are a consequence of low progress in reading, but I was surprised at the level of comorbidity. Shifting some of my attention to their academic progress was fulfilling, and enabled me to have a positive influence on a larger number of students in the schools in which I consulted.

Educational policies are beginning to reflect a better understanding of what is possible in school settings to enhance the outcomes for more students. However, schools are being asked to deliver more than their staff have been trained for. For example, Joseph (Reference Joseph2019) identified the use of data as important in school improvement. Data use is becoming mandatory in policies on decision making, progress monitoring, and program evaluation, yet it remains foreign to many teachers because their training has not emphasised its role. Psychologists speak data, and can assist schools in meeting these new responsibilities.
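As a concrete example of the kind of data work involved, progress monitoring typically means fitting a trend line to repeated curriculum-based measures, such as a student’s weekly words-correct-per-minute scores, and judging whether the rate of growth is adequate. A minimal sketch (the scores below are hypothetical, and the least-squares slope is one common choice of growth index):

```python
def improvement_slope(weeks, scores):
    """Ordinary least-squares slope: the average score gain per week."""
    n = len(weeks)
    mean_x = sum(weeks) / n
    mean_y = sum(scores) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(weeks, scores))
    var = sum((x - mean_x) ** 2 for x in weeks)
    return cov / var

# Hypothetical weekly words-correct-per-minute for one student
weeks = [1, 2, 3, 4, 5, 6, 7, 8]
wcpm = [20, 23, 25, 24, 28, 30, 31, 34]
print(round(improvement_slope(weeks, wcpm), 2))  # 1.87 words per minute gained per week
```

A slope well below the rate required to reach an end-of-year benchmark is the signal that the current intervention needs adjusting, which is precisely the data-informed decision making the policies above envisage.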

Evidence-Based Practice in School at Last?

In the report of the Teacher Education Ministerial Advisory Group (2014), the term ‘evidence-based’ appears 31 times, and one recommendation was that: ‘The theory, methods and practices taught to pre-service teachers need to be clearly based on evidence linked to impact on student learning outcomes’ (p. 18). But how are current teachers to make sense of educational research findings, separating the wheat from the chaff? Psychologists, because of their training, can assist in this task.

The DI model has some administrative features that make it an attractive option: lessons fit readily into a school timetable; their completeness relieves schools from developing their own curricula; and the clearly defined skill objectives and associated mastery tests make reporting to parents a simple task. Additionally, many teachers express a lack of confidence in individually addressing the problems of the at-risk reader, reporting that they have been insufficiently trained to do so. Who better than psychologists to oppose the numerous pseudo-scientific education programs that abound, and to counteract those teacher consultants who continue to provide seminars and training in moribund literacy strategies? In the reading domain, this involves attention to the empirical literature on the five main components of effective instruction, including individual program evaluations. As practitioners, we need to be able to discern between dross and gold, and to communicate relevant findings effectively to teachers and administrators.

Our education system ultimately will require teachers to become researchers, understand the evidence, become collaborators, and collect data to enhance their effectiveness. What a great opportunity this represents for educational and developmental psychologists!

Acknowledgments

None.

Financial support

This research received no specific grant from any funding agency, commercial, or not-for-profit sectors.

Conflicts of interest

None.

Ethical standards

Not relevant to this study.

References

Adams, G., & Engelmann, S. (1996). Research on Direct Instruction: 25 years beyond DISTAR. Seattle, WA: Educational Achievement Systems.
Adesope, O.O., Trevisan, D.A., & Sundararajan, N. (2017). Rethinking the use of tests: A meta-analysis of practice testing. Review of Educational Research, 87, 659–701.
Alfieri, L., Brooks, P.J., Aldrich, N.J., & Tenenbaum, H.R. (2010). Does discovery-based instruction enhance learning? Journal of Educational Psychology, 103, 1–18. https://doi.org/10.1037/a0021017
American Institutes for Research. (1999). An educators’ guide to schoolwide reform. https://eric.ed.gov/?id=ED460429
Anwaruddin, S.M. (2015). Teachers’ engagement with educational research: Toward a conceptual framework for locally-based interpretive communities. Education Policy Analysis Archives, 23, 1–25. http://dx.doi.org/10.14507/epaa.v23.1776
Archer, A.L., & Hughes, C.A. (2011). Explicit instruction: Effective and efficient teaching. New York, NY: The Guilford Press.
Australian Industry Group. (2016). Tackling foundation skills in the workforce. http://cdn.aigroup.com.au/Reports/2016/AIG9675_EMAIL.pdf
Becker, W.C. (1977). Teaching reading and language to the disadvantaged. What we have learned from field research. Harvard Educational Review, 47, 518–543. https://doi.org/10.17763/haer.47.4.51431w6022u51015
Benner, G.J., Nelson, J.R., Stage, S.A., & Ralston, N.C. (2011). The influence of fidelity of implementation on the reading outcomes of middle school students experiencing reading difficulties. Remedial and Special Education, 32, 79–88.
Boyes, M.E., Leitao, S., Claessen, M., Badcock, N.A., & Nayton, M. (2016). Why are reading difficulties associated with mental health problems? Dyslexia, 22, 263–266.
Buckingham, J. (2019). Graduate teachers are short changed on evidence-based reading instruction. Nomanis, 8, 12–13.
Carnine, D. (1991). Curricular interventions for teaching higher order thinking to all students: Introduction to the special series. Journal of Learning Disabilities, 24, 261–269. https://doi.org/10.1177/002221949102400502
Carnine, D. (1995). Trustworthiness, useability, and accessibility of educational research. Journal of Behavioral Education, 5, 251–258. https://doi.org/10.1007/BF02110314
Clark, R.E., Kirschner, P.A., & Sweller, J. (2012, March 23). Putting students on the path to learning: The case for fully guided instruction. American Educator. http://www.aft.org/pdfs/americaneducator/spring2012/Clark.pdf
Cohen, R., Mather, N., Schneider, D., & White, J. (2017). A comparison of schools: Teacher knowledge of explicit code-based reading instruction. Reading and Writing, 30, 653–690.
Diamond, L. (2004). High fidelity — It’s all about instructional materials. An interview with Linda Diamond of CORE. https://www.corelearn.com/files/HighFidelity.pdf
Engelmann, S. (1980). Toward the design of faultless instruction: The theoretical basis of concept analysis. Educational Technology, 20, 28–36.
Engelmann, S. (2004). Prologue to the Dalmatian and its spots: Why research-based recommendations fail Logic 101. http://www.zigsite.com/DalmatianPro.htm
Engelmann, S., & Carnine, D. (1982). Theory of instruction. New York: Irvington.
Engelmann, S., Becker, W.C., Carnine, D., & Gersten, R. (1988). The Direct Instruction Follow Through model: Design and outcomes. Education and Treatment of Children, 11, 303–317.
Fielding-Barnsley, R. (2010). Australian pre-service teachers’ knowledge of phonemic awareness and phonics in the process of learning to read. Australian Journal of Learning Difficulties, 15, 99–110.
Gersten, R.M., Keating, T., & Becker, W. (1988). The continued impact of the Direct Instruction Model: Longitudinal studies of Follow Through students. Education and Treatment of Children, 11, 318–327.
Glang, A., & Gersten, R. (1987, Winter). Coaching teachers. Direct Instruction News, 1, 4, 5, 7.
Goss, P. (2018, May 15). Five things we wouldn’t know without NAPLAN. The Conversation. https://theconversation.com/five-things-we-wouldnt-know-without-naplan-94286
Goss, P., & Sonnemann, J. (2016). Widening gaps: What NAPLAN tells us about student progress. Grattan Institute. https://grattan.edu.au/report/widening-gaps/
Gregory, R.P. (1983). Direct Instruction, disadvantaged and handicapped children: A review of the literature and some practical implications. Parts 1 & 2. Remedial Education, 18, 108–114, 130–136.
Grossen, B. (Ed.). (1996). What was that Project Follow Through? Effective School Practices, 15, 1–85.
Hammond, L. (2015). Early childhood educators’ perceived and actual metalinguistic knowledge, beliefs and enacted practice about teaching early reading. Australian Journal of Learning Difficulties, 3, 853–864.
Hammond, L., & Moore, W.M. (2018). Teachers taking up explicit instruction: The impact of a professional development and directive instructional coaching model. Australian Journal of Teacher Education, 43, 110–133.
Hattie, J. (2009). Visible learning: A synthesis of over 800 meta-analyses relating to achievement. Abingdon, UK: Routledge.
Hempenstall, K. (1996). The gulf between educational research and policy: The example of Direct Instruction and whole language. Behaviour Change, 13, 33–46.
Hempenstall, K. (2006). What does evidence-based practice in education mean? Australian Journal of Learning Disabilities, 11, 83–92. https://doi.org/10.1080/19404150609546811
Holden, R. (2019, December 6). Vital signs: Australia’s slipping student scores will lead to greater income inequality. The Conversation. https://theconversation.com/vital-signs-australias-slipping-student-scores-will-lead-to-greater-income-inequality-128301
Holden, R., & Zhang, J. (2018). The economic impact of improving regional, rural & remote education in Australia: Closing the human capital gap. http://research.economics.unsw.edu.au/richardholden/assets/gonski-report-final.pdf
Joseph, B. (2019). Overcoming the odds: A study of Australia’s top-performing disadvantaged schools. Centre for Independent Studies, RR39. https://www.cis.org.au/publications/research-reports/overcoming-the-odds-a-study-of-australias-top-performing-disadvantaged-schools/
Katzir, T., Young-Suk, G.K., & Dotan, S. (2018). Reading self-concept and reading anxiety in second grade children: The roles of word reading, emergent literacy skills, working memory and gender. Frontiers in Psychology, 9, 1–13.
McMullen, F., & Madelaine, A. (2014). Why is there so much resistance to Direct Instruction? Australian Journal of Learning Difficulties, 19, 137–151.
Meeks, L., & Kemp, C.R. (2017). How well prepared are Australian preservice teachers to teach early reading skills? Australian Journal of Teacher Education, 42, 1–17.
Megherbi, H., Elbro, C., Oakhill, J.V., Segui, J., & New, B. (2018). The emergence of automaticity in reading: Effects of orthographic depth and word decoding ability on an adjusted Stroop measure. Journal of Experimental Child Psychology, 166, 652–663.
National Inquiry into the Teaching of Literacy. (2005). Teaching reading — A review of the evidence-based research literature on approaches to the teaching of literacy, particularly those that are effective in assisting students with reading difficulties. Canberra, Australia: Department of Education, Science and Training.
National Institute for Direct Instruction. (2017). DI component analysis. In Writings on Direct Instruction: A bibliography (p. 180). https://www.nifdi.org/docman/research/bibliography/205-di-bibliography-reference-list/file
National Institute for Direct Instruction. (2018). Information about the individual DI programs. https://www.nifdi.org/programs/about-the-programs
National Reading Panel. (2000). Report of the National Reading Panel. National Institute of Child Health and Development.
New South Wales Parliament Legislative Council. (2020). Report 40. Measurement and outcome-based funding in New South Wales schools informed by the data: Evidence-based education in NSW. https://www.parliament.nsw.gov.au/lcdocs/inquiries/2539/PC3%20-%20Final%20Report%20-%20Measurement%20and%20outcome%20based%20funding%20in%20NSW%20schools%20-%2018%20February%202020.pdf
Plavnick, J., Marchand-Martella, N., Martella, R., Thompson, J., & Wood, A.L. (2015). A review of explicit and systematic scripted instructional programs for students with autism spectrum disorder. Review Journal of Autism and Developmental Disorders, 2, 55–66.
Rist, M.C. (1992). Learning by heart. The Executive Educator, November, 12–19.
Rosenshine, B.V. (1986). Synthesis of research on explicit teaching. Educational Leadership, 43, 60–69.
Rosenshine, B.V. (2002). Helping students from low-income homes read at grade level. Journal for Students Placed at Risk, 7, 273–283. https://doi.org/10.1207/S15327671ESPR0702_9
Russell, G., Ryder, D., Norwich, B., & Ford, T. (2015). Behavioural difficulties that co-occur with specific word reading difficulties: A UK population-based cohort study. Dyslexia, 21, 123–141.
Schug, M., Tarver, S., & Western, R. (2001). Direct instruction and the teaching of early reading. Wisconsin Policy Research Institute Report, 14, 1–29.
Stark, H.L., Snow, P.C., Eadie, P.A., & Goldfeld, S.R. (2016). Language and reading instruction in early years classrooms: The knowledge and self-rated ability of Australian teachers. Annals of Dyslexia, 66, 28–54.
Stephenson, J. (2018). A systematic review of the research on the knowledge and skills of Australian preservice teachers. Australian Journal of Teacher Education, 43, 121–137.
Stockard, J., & Engelmann, K. (2010). The development of early academic success: The impact of Direct Instruction’s Reading Mastery. Journal of Behavior Assessment and Intervention in Children, 1, 2–24.
Stockard, J., Wood, T.W., Coughlin, C., & Khoury, C.R. (2018). The effectiveness of Direct Instruction curricula: A meta-analysis of a half century of research. Review of Educational Research, 88, 479–507.
Suggate, S.P. (2016). A meta-analysis of the long-term effects of phonemic awareness, phonics, fluency, and reading comprehension interventions. Journal of Learning Disabilities, 49, 77–96.
Swanson, H.L. (2001). Research on interventions for adolescents with learning disabilities: A meta-analysis of outcomes related to higher-order processing. The Elementary School Journal, 101, 331–348.
Teacher Education Ministerial Advisory Group. (2014). Action now: Classroom ready teachers. https://www.aitsl.edu.au/tools-resources/resource/action-now-classroom-ready-teachers
Thomson, S., De Bortoli, L., Underwood, C., & Schmid, M. (2019). PISA 2018 in brief: I. Student performance. https://research.acer.edu.au/ozpisa/34
White, W.A.T. (1988). A meta-analysis of the effects of direct instruction in special education. Education & Treatment of Children, 11, 364–374.