As increasing numbers of research papers in applied linguistics, language learning, and assessment use discourse analysis techniques to assess accuracy in performance, it is timely to examine in detail the wide variety of measures employed. Ideally, measures should capture accuracy as validly and reliably as possible, but this has proved elusive. In this article, we systematically review the range of measures in use in these fields, both global and local, before presenting a more finely tuned weighted clause ratio measure that classifies errors at three levels: those that seriously impede communication, those that impair communication to some degree, and those that do not impair communication at all. The problem of reliably identifying these levels is discussed, followed by an analysis of samples of written and spoken second language performance data. This new measure, grounded in a comprehensive review of prior practice in the field, has the advantages of being relatively easy to use, measuring accuracy rather than error, and detecting smaller improvements in performance than has previously been possible.
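To make the general idea concrete, a weighted clause ratio can be sketched as an average of per-clause weights, where each clause is assigned a weight according to the severity of any errors it contains. The category labels and numeric weights below are illustrative assumptions only, not the weighting scheme proposed in this article:

```python
# Illustrative sketch of a weighted clause ratio (WCR).
# The severity weights below are assumptions for illustration,
# NOT the weighting scheme published in this article.
WEIGHTS = {
    "accurate": 1.0,  # clause with no errors
    "level_1": 0.8,   # errors that do not impair communication
    "level_2": 0.5,   # errors that impair communication to some degree
    "level_3": 0.1,   # errors that seriously impede communication
}

def weighted_clause_ratio(clause_levels):
    """Average the severity weights over all clauses in a sample."""
    if not clause_levels:
        raise ValueError("sample contains no clauses")
    return sum(WEIGHTS[level] for level in clause_levels) / len(clause_levels)

# A ten-clause sample: seven fully accurate clauses plus one of each error level.
sample = ["accurate"] * 7 + ["level_1", "level_2", "level_3"]
print(round(weighted_clause_ratio(sample), 2))  # prints 0.84
```

Because every clause contributes a graded weight rather than a binary correct/incorrect judgment, small shifts in error severity move the ratio, which is what allows the measure to register finer-grained improvements than an error-free-clause count.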