
A Comparison of Two Quick Methods for Identifying Low-progress Readers: Teacher Judgment Versus Curriculum-based Measurement

Published online by Cambridge University Press:  26 February 2016

Alison Madelaine
Affiliation:
Macquarie University Special Education Centre, Macquarie University, Sydney
Kevin Wheldall*
Affiliation:
Macquarie University Special Education Centre, Macquarie University, Sydney
*
Correspondence concerning this article should be addressed to: Kevin Wheldall, Director, Macquarie University Special Education Centre, Macquarie University, Sydney, NSW 2109, Australia. Phone: (02) 9850 9621. Fax: (02) 9850 9661. E-mail: kevin.wheldall@mq.edu.au

Abstract

Teacher judgment (TJ) is frequently employed as the basis for selecting students in need of specialist help in reading. Two studies are presented in which TJ is compared with a quick alternative deriving from curriculum-based measurement (CBM) that has been shown to be both highly reliable and valid. In the first study, 32 teachers of year two to year six classes were required to categorise their students into the top 25%, middle 50% and bottom 25% for reading performance. Compared with categorisation based on the more objective CBM measure, the mean accuracy of TJ was 67%, varying between 29% and 100%. In the second study, 24 teachers of year one to year five classes were required to categorise ten randomly selected students from their classes into the top three, middle four and bottom three for reading performance. Similar results were obtained with mean accuracy of TJ at 65%, varying between 20% and 100%. Taken together, the findings of the two studies suggest that reliance on TJ for instructional decision-making may be misplaced and that a more objective, quick alternative based on CBM may be preferable.
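The banding-and-agreement procedure the abstract describes can be sketched as follows. This is an illustrative reconstruction, not the authors' actual analysis: students are placed into bottom 25%, middle 50%, and top 25% bands from an objective CBM score, and TJ accuracy is the percentage of students the teacher assigned to the same band. All function names and data are hypothetical.

```python
# Hypothetical sketch of the banding-agreement calculation described in the
# abstract. Names and data are illustrative, not the authors' method.

def cbm_bands(scores):
    """Band each student by CBM score (e.g. words read correctly per minute):
    'bottom' 25%, 'middle' 50%, 'top' 25% of the class."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    n = len(scores)
    cut_low = round(n * 0.25)        # end of the bottom quarter
    cut_high = n - round(n * 0.25)   # start of the top quarter
    bands = [None] * n
    for rank, i in enumerate(order):
        if rank < cut_low:
            bands[i] = "bottom"
        elif rank < cut_high:
            bands[i] = "middle"
        else:
            bands[i] = "top"
    return bands

def tj_accuracy(teacher_bands, cbm_scores):
    """Percentage of students the teacher placed in the same band as CBM."""
    objective = cbm_bands(cbm_scores)
    hits = sum(t == o for t, o in zip(teacher_bands, objective))
    return 100.0 * hits / len(objective)

# Illustrative class of 8 students.
scores = [25, 90, 60, 45, 110, 30, 75, 55]   # CBM scores
teacher = ["bottom", "top", "middle", "middle",
           "top", "middle", "middle", "bottom"]
print(tj_accuracy(teacher, scores))  # 75.0: 6 of 8 students matched
```

Under this scheme a teacher who matched the CBM bands for 6 of 8 students would score 75%, which falls within the 20%–100% range the studies report.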

Type
Research Article
Copyright
Copyright © The Australian Association of Special Education 2002

