Introduction
The purpose of the present chapter is to provide an account of the evaluation of a program, not so much to judge the program as to document the shifting perspectives, false starts, design alterations, obstacles, and difficulties of interpretation that often characterise evaluations.
Evaluation reports that give the impression of a smooth, professional operation that went off without a hitch are far from the mark. Current thinking in evaluation would deny the validity of such impressions. Discussion with other evaluators, my own experience, and established perspectives in the evaluation literature (e.g. Cronbach et al. 1980) suggest that a more realistic picture would present a messy, chaotic series of compromises, where classic research designs disintegrate, where vain hopes of contributing to certain kinds of learning theory are soon dispelled, and where all that lies between the pragmatic evaluator and scholarly perdition is a sense of disciplined inquiry, whether it relates to quasi-experimental research, to a host of naturalistic approaches, or to policy analysis.
The intention of this chapter is to provide food for thought not only for would-be evaluators, but also for L2 program developers, who can assist substantially in the evaluation process in ways that will, I hope, become evident in the course of the present account.
In the mid-1980s, I conducted an evaluation of an English language project in Bangalore, India. In this chapter, I will use this evaluation as a focus for discussion because it illustrates many of the issues that are of interest to evaluators.