Cited by 14

    This article has been cited by the following publications. This list is generated based on data provided by CrossRef.

    Chen, Jing and Zhang, Mo 2016. Quantitative Psychology Research.


    Lee, Kong Joo and Lee, Gyoung Ho 2015. Automatic Detection of Off-topic Documents using ConceptNet and Essay Prompt in Automated English Essay Scoring. Journal of KIISE, Vol. 42, Issue. 12, p. 1522.


    2015. Beyond the Bubble Test.


    Breyer, F. Jay, Attali, Yigal, Williamson, David M., Ridolfi-McCulla, Laura, Ramineni, Chaitanya, Duchnowski, Matthew and Harris, April 2014. A Study of the Use of the e-rater® Scoring Engine for the Analytical Writing Measure of the GRE® revised General Test. ETS Research Report Series, Vol. 2014, Issue. 2, p. 1.


    Higgins, Derrick and Heilman, Michael 2014. Managing What We Can Measure: Quantifying the Susceptibility of Automated Scoring Systems to Gaming Behavior. Educational Measurement: Issues and Practice, Vol. 33, Issue. 3, p. 36.


    Serrano, J. Ignacio, del Castillo, M. Dolores and Iglesias, Ángel 2014. Effects of text essay quality on readers’ working memory by a computational model. Biologically Inspired Cognitive Architectures, Vol. 7, p. 39.


    Deane, Paul 2013. On the relation between automated essay scoring and modern views of the writing construct. Assessing Writing, Vol. 18, Issue. 1, p. 7.


    Kumar, Niraj and Dey, Lipika 2013. 2013 12th Mexican International Conference on Artificial Intelligence. p. 216.

    Burstein, Jill 2012. The Encyclopedia of Applied Linguistics.


    Li, Yali and Yan, Yonghong 2012. 2012 Fifth International Conference on Intelligent Computation Technology and Automation. p. 65.

    Kakkonen, Tuomo and Sutinen, Erkki 2011. EssayAid: towards a semi-automatic system for assessing student texts. International Journal of Continuing Engineering Education and Life-Long Learning, Vol. 21, Issue. 2/3, p. 119.


    Li, Yali and Yan, Yonghong 2010. 2010 Second International Workshop on Education Technology and Computer Science. p. 94.

    Shermis, M. D., Burstein, J., Higgins, D. and Zechner, K. 2010. International Encyclopedia of Education.


    Spencer, Brenda and Louw, Henk 2008. A practice-based evaluation of an on-line writing evaluation system: First-World technology in a Third-World teaching context. Language Matters, Vol. 39, Issue. 1, p. 111.



Identifying off-topic student essays without topic-specific training data

  • D. HIGGINS, J. BURSTEIN and Y. ATTALI
  • DOI: http://dx.doi.org/10.1017/S1351324906004189
  • Published online: 22 May 2006
Abstract

Educational assessment applications, as well as other natural-language interfaces, need some mechanism for validating user responses. If the input provided to the system is infelicitous or uncooperative, the proper response may be to simply reject it, to route it to a bin for special processing, or to ask the user to modify the input. If problematic user input is instead handled as if it were the system's normal input, this may degrade users' confidence in the software, or suggest ways in which they might try to “game” the system. Our specific task in this domain is the identification of student essays which are “off-topic”, or not written to the test question topic. Identification of off-topic essays is of great importance for the commercial essay evaluation system Criterion℠. The methods previously used for this task required 200–300 human-scored essays for training. However, there are situations in which no essays are available for training, such as when users (teachers) wish to spontaneously write a new topic for their students. For these cases, we need a system that works reliably without training data. This paper describes an algorithm that detects when a student's essay is off-topic without requiring a set of topic-specific essays for training. The new system is comparable in performance to previous models that require topic-specific training essays, and it provides more detailed information about the way in which an essay diverges from the requested essay topic.
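The abstract does not spell out the algorithm itself, but the general idea of prompt-based off-topic detection can be illustrated with a short sketch: compare the essay's similarity to its assigned prompt against its similarity to a pool of unrelated reference prompts, and flag the essay when the assigned prompt does not stand out. The Python sketch below is a hypothetical illustration, not the authors' implementation; the TF-IDF representation, the example prompts, and the rank cutoff are assumptions introduced for the example.

    # Hypothetical sketch of prompt-comparison off-topic detection,
    # not the implementation described in the paper.  The essay and every
    # prompt are embedded as TF-IDF vectors; the essay is flagged as
    # off-topic when its assigned prompt is not among the prompts it most
    # resembles.  The example prompts and the rank cutoff are assumptions.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def is_off_topic(essay, target_prompt, reference_prompts, max_rank=0):
        """Flag the essay if reference prompts out-rank its assigned prompt.

        max_rank=0 requires the assigned prompt to be the single most
        similar prompt; larger values make the check more lenient.
        """
        prompts = [target_prompt] + list(reference_prompts)
        vectorizer = TfidfVectorizer(stop_words="english")
        matrix = vectorizer.fit_transform(prompts + [essay])

        essay_vec = matrix[-1]        # last row is the essay
        prompt_vecs = matrix[:-1]     # remaining rows are the prompts
        sims = cosine_similarity(essay_vec, prompt_vecs).ravel()

        # Number of prompts strictly more similar to the essay than the target.
        rank_of_target = int((sims > sims[0]).sum())
        return rank_of_target > max_rank

    references = [
        "Discuss whether zoos should be abolished.",
        "Explain the impact of social media on teenagers.",
        "Argue for or against a longer school year.",
    ]

    # An essay about social media submitted to a school-uniform prompt
    # should be flagged as off-topic.
    print(is_off_topic(
        essay="Social media changes how teenagers spend time and see themselves.",
        target_prompt="Should students be required to wear school uniforms?",
        reference_prompts=references,
    ))  # -> True

Comparing the essay against every prompt in this way also yields per-prompt similarity scores that could be reported back to the user, in the spirit of the more detailed divergence information the abstract mentions.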

Footnotes
The authors would like to thank Chi Lu and Slava Andreyev for their help in carrying out the experiments described in this paper.
Natural Language Engineering
  • ISSN: 1351-3249
  • EISSN: 1469-8110
  • URL: /core/journals/natural-language-engineering