
9 - Using the Attribute Hierarchy Method to Make Diagnostic Inferences About Examinees' Cognitive Skills

Published online by Cambridge University Press: 23 November 2009

Mark J. Gierl, Professor of Educational Psychology, University of Alberta
Jacqueline P. Leighton, Associate Professor of Educational Psychology, University of Alberta
Stephen M. Hunka, University Professor of Educational Psychology, University of Alberta

Summary

INTRODUCTION

Many educational assessments are based on cognitive problem-solving tasks. Cognitive diagnostic assessments are designed to model examinees' cognitive performances on these tasks and yield specific information about their problem-solving strengths and weaknesses. Although most psychometric models are based on latent trait theories, a cognitive diagnostic assessment requires a cognitive information processing approach to model the psychology of test performance because the score inference is specifically targeted to examinees' cognitive skills. Latent trait theories posit that a small number of stable underlying characteristics or traits can be used to explain test performance. Individual differences on these traits account for variation in performance over a range of testing situations (Messick, 1989). Trait performance is often used to classify or rank examinees because these traits are specified at a large grain size and are deemed to be stable over time.

Cognitive information processing theories require a much deeper understanding of trait performance, where the psychological features of how a trait can produce a performance become the focus of inquiry (cf. Anderson et al., 2004). With a cognitive approach, problem solving is assumed to require the processing of information using relevant sequences of operations. Examinees are expected to differ in the knowledge they possess and the processes they apply, thereby producing response variability in each test-taking situation. Because these knowledge structures and processing skills are specified at a small grain size and are expected to vary among examinees within any testing situation, cognitive theories and models can be used to understand and evaluate specific cognitive skills that affect test performance.
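To make the idea of fine-grained knowledge structures concrete, the sketch below shows one way such structures can be encoded in the spirit of the attribute hierarchy method this chapter develops: cognitive attributes are ordered by prerequisite relations in an adjacency matrix, a reachability matrix is derived from it by transitive closure, and an examinee's attribute-mastery pattern can be checked for consistency with the hierarchy. The four-attribute linear hierarchy, the attribute labels, and the helper functions are illustrative assumptions for this sketch, not material taken from the chapter.

```python
import numpy as np

# Illustrative only: a linear hierarchy of four hypothetical attributes,
# A1 -> A2 -> A3 -> A4, where an arrow means "is a direct prerequisite of".
# Cell (j, k) of the adjacency matrix is 1 when attribute j is a direct
# prerequisite of attribute k.
A = np.array([
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
    [0, 0, 0, 0],
], dtype=bool)

def reachability(adjacency):
    """Boolean transitive closure of (adjacency + identity): entry (j, k) is
    True when attribute j is a direct or indirect prerequisite of k, or j == k."""
    n = adjacency.shape[0]
    r = adjacency | np.eye(n, dtype=bool)
    for _ in range(n):
        nxt = (r.astype(int) @ r.astype(int)) > 0   # Boolean matrix product
        if np.array_equal(nxt, r):                  # stop once closure is stable
            break
        r = nxt
    return r

def consistent_with_hierarchy(pattern, r):
    """True if every mastered attribute's prerequisites are also mastered."""
    mastered = np.asarray(pattern, dtype=bool)
    return all(mastered[r[:, k]].all() for k in range(len(mastered)) if mastered[k])

R = reachability(A)
print(R.astype(int))                                # upper triangular for a linear hierarchy
print(consistent_with_hierarchy([1, 1, 0, 0], R))   # True: respects the ordering
print(consistent_with_hierarchy([1, 0, 1, 0], R))   # False: A3 mastered without A2
```

In the attribute hierarchy and rule-space literature (Tatsuoka, 1983; Leighton, Gierl, & Hunka, 2004), matrices of this kind are the starting point for deriving the expected response patterns against which observed examinee responses are compared; the small example here is only meant to show how prerequisite structure at a small grain size can be represented and checked.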

Type: Chapter
In: Cognitive Diagnostic Assessment for Education: Theory and Applications, pp. 242–274
Publisher: Cambridge University Press
Print publication year: 2007


References

American Educational Research Association (AERA), American Psychological Association, & National Council on Measurement in Education. (1999). Standards for educational and psychological testing. Washington, DC: AERA.
Anderson, J.R. (2005). Human symbol manipulation within an integrated cognitive architecture. Cognitive Science, 29, 313–341.
Anderson, J.R., Bothell, D., Byrne, M.D., Douglass, S., Lebiere, C., & Qin, Y. (2004). An integrated theory of the mind. Psychological Review, 111, 1036–1060.
Anderson, J.R., Reder, L.M., & Simon, H.A. (2000). Applications and misapplications of cognitive psychology to mathematics education. Retrieved June 7, 2006, from http://act-r.psy.cmu.edu/publications.
Anderson, J.R., & Schunn, C.D. (2000). Implications of the ACT-R learning theory: No magic bullets. In Glaser, R. (Ed.), Advances in instructional psychology: Educational design and cognitive science (Vol. 5, pp. 1–33). Mahwah, NJ: Erlbaum.
Bransford, J.D., Brown, A.L., & Cocking, R.R. (1999). How people learn: Brain, mind, experience, and school. Washington, DC: National Academy Press.
Brown, J.S., & Burton, R.R. (1978). Diagnostic models for procedural bugs in basic mathematics skills. Cognitive Science, 2, 155–192.
Cui, Y., Leighton, J.P., Gierl, M.J., & Hunka, S. (2006, April). A person-fit statistic for the attribute hierarchy method: The hierarchy consistency index. Paper presented at the annual meeting of the National Council on Measurement in Education, San Francisco.
Dawson, M.R.W. (1998). Understanding cognitive science. Malden, MA: Blackwell.
Donovan, M.S., Bransford, J.D., & Pellegrino, J.W. (1999). How people learn: Bridging research and practice. Washington, DC: National Academy Press.
Embretson, S.E. (1999). Cognitive psychology applied to testing. In Durso, F.T., Nickerson, R.S., Schvaneveldt, R.W., Dumais, S.T., Lindsay, D.S., & Chi, M.T.H. (Eds.), Handbook of applied cognition (pp. 629–66). New York: Wiley.
Ericsson, K.A., & Simon, H.A. (1993). Protocol analysis: Verbal reports as data. Cambridge, MA: MIT Press.
Fodor, J.A. (1983). The modularity of mind. Cambridge, MA: MIT Press.
Gierl, M.J., Leighton, J.P., & Hunka, S. (2000). Exploring the logic of Tatsuoka's rule-space model for test development and analysis. Educational Measurement: Issues and Practice, 19, 34–44.
Gierl, M.J., Bisanz, J., & Li, Y.Y. (2004, April). Using the multidimensionality-based DIF analysis framework to study cognitive skills that elicit gender differences. Paper presented at the annual meeting of the National Council on Measurement in Education, San Diego.
Gierl, M.J., Cui, Y., & Hunka, S. (2007, April). Using connectionist models to evaluate examinees' response patterns on tests using the attribute hierarchy method. Paper presented at the annual meeting of the National Council on Measurement in Education, Chicago.
Glaser, R., Lesgold, A., & Lajoie, S. (1987). Toward a cognitive theory for the measurement of achievement. In Ronning, R.R., Glover, J.A., Conoley, J.C., & Witt, J.C. (Eds.), The influence of cognitive psychology on testing (pp. 41–85). Hillsdale, NJ: Erlbaum.
Goodman, D.P., & Hambleton, R.K. (2004). Student test score reports and interpretative guides: Review of current practices and suggestions for future research. Applied Measurement in Education, 17, 145–220.
Hunt, E. (1995). Where and when to represent students this way and that way: An evaluation of approaches to diagnostic assessment. In Nichols, P.D., Chipman, S.F., & Brennan, R.L. (Eds.), Cognitively diagnostic assessment (pp. 411–429). Hillsdale, NJ: Erlbaum.
Kuhn, D. (2001). Why development does (and does not) occur: Evidence from the domain of inductive reasoning. In McClelland, J.L. & Siegler, R. (Eds.), Mechanisms of cognitive development: Behavioral and neural perspectives (pp. 221–249). Hillsdale, NJ: Erlbaum.
Leighton, J.P. (2004). Avoiding misconceptions, misuse, and missed opportunities: The collection of verbal reports in educational achievement testing. Educational Measurement: Issues and Practice, 23, 6–15.
Leighton, J.P., & Gierl, M.J. (in press). Defining and evaluating models of cognition used in educational measurement to make inferences about examinees' thinking processes. Educational Measurement: Issues and Practice.
Leighton, J.P., Gierl, M.J., & Hunka, S. (2004). The attribute hierarchy model: An approach for integrating cognitive theory with assessment practice. Journal of Educational Measurement, 41, 205–236.
Leighton, J.P., & Gokiert, R. (2005, April). The cognitive effects of test item features: Identifying construct irrelevant variance and informing item generation. Paper presented at the annual meeting of the National Council on Measurement in Education, Montréal, Canada.
Messick, S. (1989). Validity. In Linn, R.L. (Ed.), Educational measurement (3rd ed., pp. 13–103). New York: American Council on Education/Macmillan.
Mislevy, R.J. (1996). Test theory reconceived. Journal of Educational Measurement, 33, 379–416.
Mislevy, R.J., Steinberg, L.S., & Almond, R.G. (2003). On the structure of educational assessments. Measurement: Interdisciplinary Research and Perspectives, 1, 3–62.
National Research Council. (2001). Knowing what students know: The science and design of educational assessment. Washington, DC: National Academy Press.
Nichols, P. (1994). A framework for developing cognitively diagnostic assessments. Review of Educational Research, 64, 575–603.
Nichols, P., & Sugrue, B. (1999). The lack of fidelity between cognitively complex constructs and conventional test development practice. Educational Measurement: Issues and Practice, 18, 18–29.
Norris, S.P., Leighton, J.P., & Phillips, L.M. (2004). What is at stake in knowing the content and capabilities of children's minds? A case for basing high stakes tests on cognitive models. Theory and Research in Education, 2, 283–308.
Pellegrino, J.W. (1988). Mental models and mental tests. In Wainer, H. & Braun, H.I. (Eds.), Test validity (pp. 49–60). Hillsdale, NJ: Erlbaum.
Pellegrino, J.W. (2002). Understanding how students learn and inferring what they know: Implications for the design of curriculum, instruction, and assessment. In Smith, M.J. (Ed.), NSF K-12 Mathematics and Science Curriculum and Implementation Centers Conference Proceedings (pp. 76–92). Washington, DC: National Science Foundation and American Geological Institute.
Pellegrino, J.W., Baxter, G.P., & Glaser, R. (1999). Addressing the "two disciplines" problem: Linking theories of cognition and learning with assessment and instructional practices. In Iran-Nejad, A. & Pearson, P.D. (Eds.), Review of Research in Education (pp. 307–353). Washington, DC: American Educational Research Association.
Poggio, A., Clayton, D.B., Glasnapp, D., Poggio, J., Haack, P., & Thomas, J. (2005, April). Revisiting the item format question: Can the multiple choice format meet the demand for monitoring higher-order skills? Paper presented at the annual meeting of the National Council on Measurement in Education, Montreal, Canada.
Royer, J.M., Cisero, C.A., & Carlo, M.S. (1993). Techniques and procedures for assessing cognitive skills. Review of Educational Research, 63, 201–243.
Scriven, M. (1991). Evaluation thesaurus (4th ed.). Newbury Park, CA: Sage.
Snow, R.E., & Lohman, D.F. (1989). Implications of cognitive psychology for educational measurement. In Linn, R.L. (Ed.), Educational measurement (3rd ed., pp. 263–331). New York: American Council on Education/Macmillan.
Tatsuoka, K.K. (1983). Rule space: An approach for dealing with misconceptions based on item response theory. Journal of Educational Measurement, 20, 345–354.
Tatsuoka, K.K. (1995). Architecture of knowledge structures and cognitive diagnosis: A statistical pattern recognition and classification approach. In Nichols, P.D., Chipman, S.F., & Brennan, R.L. (Eds.), Cognitively diagnostic assessment (pp. 327–359). Hillsdale, NJ: Erlbaum.
Tatsuoka, M.M., & Tatsuoka, K.K. (1989). Rule space. In Kotz, S. & Johnson, N.L. (Eds.), Encyclopedia of statistical sciences (pp. 217–220). New York: Wiley.
Taylor, K.L., & Dionne, J-P. (2000). Accessing problem-solving strategy knowledge: The complementary use of concurrent verbal protocols and retrospective debriefing. Journal of Educational Psychology, 92, 413–425.
VanderVeen, A.A., Huff, K., Gierl, M., McNamara, D.S., Louwerse, M., & Graesser, A. (in press). Developing and validating instructionally relevant reading competency profiles measured by the critical reading section of the SAT. In McNamara, D.S. (Ed.), Reading comprehension strategies: Theories, interventions, and technologies. Mahwah, NJ: Erlbaum.
Webb, N.L. (2006). Identifying content for student achievement tests. In Downing, S.M. & Haladyna, T.M. (Eds.), Handbook of test development (pp. 155–180). Mahwah, NJ: Erlbaum.
