
Reconsidering the Measurement of Political Knowledge

Published online by Cambridge University Press: 04 January 2017

Jeffery J. Mondak*
Affiliation: Florida State University

Abstract

Political knowledge has emerged as one of the central variables in political behavior research, with numerous scholars devoting considerable effort to explaining variance in citizens' levels of knowledge and to understanding the consequences of this variance for representation. Although such substantive matters continue to receive exhaustive study, questions of measurement also warrant attention. I demonstrate that conventional measures of political knowledge, constructed by summing a respondent's correct answers on a battery of factual items, are of uncertain validity. Rather than collapsing incorrect and "don't know" responses into a single absence-of-knowledge category, I introduce estimation procedures that allow the effects of these two response types to vary. Grouped-data multinomial logistic regression results demonstrate that incorrect answers and don't knows perform dissimilarly, a finding that suggests deficiencies in the construct validity of conventional knowledge measures. The problem is traced to two likely sources: knowledge may not be discrete, meaning that a simple count of correct answers provides an imprecise measure; and, as demonstrated by a wealth of research in educational testing and psychology since the 1930s, the measurement procedures used in political science potentially yield "knowledge" scales contaminated by systematic personality effects.
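As an illustration of the contrast the abstract describes, the sketch below simulates responses to a single factual item and compares a conventional binary correct/not-correct coding with a multinomial logistic model that keeps incorrect and "don't know" answers as distinct categories. This is a minimal sketch in Python using statsmodels, not the article's grouped-data estimator; the simulated data, the education covariate, and the response probabilities are assumptions introduced here purely for illustration.

# A minimal sketch (assumptions noted above): compare a collapsed
# correct/not-correct coding with a three-category multinomial model.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
education = rng.integers(0, 5, size=n)            # hypothetical covariate

# Simulated response to one factual item: 0 = don't know, 1 = incorrect, 2 = correct.
p_correct = 1.0 / (1.0 + np.exp(-(education - 2)))
p_dk = (1.0 - p_correct) * 0.6                    # by assumption, DKs outnumber errors
p_incorrect = 1.0 - p_correct - p_dk
response = np.array([rng.choice(3, p=[dk, inc, cor])
                     for dk, inc, cor in zip(p_dk, p_incorrect, p_correct)])

X = sm.add_constant(education)

# Conventional measure: collapse "don't know" and incorrect answers into a
# single absence-of-knowledge category and model the binary outcome.
binary_fit = sm.Logit((response == 2).astype(int), X).fit(disp=0)

# Alternative: let the covariate relate differently to "don't know" and
# incorrect responses by modeling all three categories.
multinomial_fit = sm.MNLogit(response, X).fit(disp=0)

print(binary_fit.params)        # one coefficient per covariate
print(multinomial_fit.params)   # separate coefficients for each non-baseline category

If incorrect answers and don't knows were interchangeable, the two sets of multinomial coefficients would be expected to look alike; the article's finding is that in actual survey data they do not, which is what casts doubt on the collapsed additive scale.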

Type: Research Article

Copyright © 1999 by the Society for Political Methodology

