Automatically testing console I/O behavior of student submissions in Haskell

Published online by Cambridge University Press:  08 September 2025

OLIVER WESTPHAL
Affiliation:
Universität Duisburg-Essen, Germany (e-mail: oliver.westphal@uni-due.de)
JANIS VOIGTLÄNDER
Affiliation:
Universität Duisburg-Essen, Germany (e-mail: janis.voigtlaender@uni-due.de)

Abstract

Good test suites are an important tool for checking the correctness of programs. They are also essential in unsupervised educational settings, such as automatic grading, or for students checking their solutions to programming tasks on their own. For most Haskell programming tasks, one can easily provide high-quality test suites using standard tools like QuickCheck. Unfortunately, this is no longer the case once we leave the purely functional world and enter the lands of console I/O. Nonetheless, understanding console I/O is an important part of learning Haskell, and we would like to give students the same support as for other subject matters. The difficulty in testing console I/O programs arises from the standard tools’ lack of support for specifying intended console interactions as simple declarative properties. These interactions are, however, essential for determining whether a program behaves as desired. We describe the console interactions of a program by tracing its text input and output actions. To describe which traces match the intended behavior of the program under test, we present a formal specification language. The language is designed to capture, as far as possible, the interactive behavior found in commonly used textbook exercises and examples, as well as in our own teaching, while retaining simplicity and clarity of specifications. We intentionally restrict the language, ensuring that expressed behavior is truly interactive and not simply a pure string-builder function in disguise. Based on this specification language, we build a framework for testing programs against specifications in an automated way. A central feature of the testing procedure is the use of a constraint solver to find meaningful input sequences for the program under test.
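The idea of tracing a program's text input and output actions can be illustrated with a minimal sketch. The `Event` datatype and the `sumProg` model below are hypothetical illustrations for this summary, not the paper's actual implementation; they model the kind of trace written $?2\,?5\,?3\,!8$ (read 2, read 5 and 3, write their sum 8) as a plain list of events:

```haskell
-- Hypothetical sketch: a console interaction trace as a list of
-- read (?) and write (!) events on integers.
data Event = Read Integer | Write Integer
  deriving (Eq, Show)

-- A tiny summing program, modeled here as a pure function from the
-- available input stream to the trace it would produce: read a count n,
-- then n integers, and finally write their sum.
sumProg :: [Integer] -> [Event]
sumProg (n : rest) =
  let xs = take (fromIntegral n) rest
  in Read n : map Read xs ++ [Write (sum xs)]
sumProg [] = []

-- For inputs [2,5,3] this yields the trace
-- [Read 2, Read 5, Read 3, Write 8], i.e. ?2 ?5 ?3 !8.
```

Note that this pure model is exactly the "string-builder function in disguise" the abstract warns about: the whole point of the paper's specification language is to describe genuinely interactive behavior, where later inputs may depend on earlier outputs, rather than such a pure mapping.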

Information

Type
Research Article
Creative Commons
Creative Commons License - CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press
Fig. 1: Hand-written correctness checker and input generator.

Fig. 2: Successful matching of trace $\mathop{?{2}}\mathop{?{5}}\mathop{?{3}}\mathop{!{8}}\textit{stop}$.

Fig. 3: Single satisfiable path (with $\textit{len}$ evaluated on the right).

Fig. 4: Additional constraints to avoid Int overflows.

Fig. 5: Some expressible exercise tasks.

Fig. 6: Syntax of specifications (top) and terms (bottom).

Fig. 7: Loop body effects.

Fig. 8: Trace acceptance.

Fig. 9: Acceptance example (read a natural number and then that many integers, but stop on reading 0).

Fig. 10: Solving $\operatorname{\textit{accept}}$ for $t$ with desired input sequence.

Fig. 11: Specification execution (differences to Figure 8 are in gray).

Fig. 12: Evaluating $\operatorname{\textit{gtrace}}$ for the example from Figure 10.

Fig. 13: Specification paths (differences to Figure 8 are in gray).

Fig. 14: Comparing $\operatorname{\textit{gtrace}}$, as per Figure 12, and $\operatorname{\textit{paths}}$.
