
Criteria for performance evaluation

Published online by Cambridge University Press:  01 January 2023

David J. Weiss*
Affiliation:
California State University, Los Angeles
Kristin Brennan
Affiliation:
California State University, Los Angeles
Rick Thomas
Affiliation:
University of Oklahoma
Alex Kirlik
Affiliation:
University of Illinois
Sarah M. Miller
Affiliation:
University of Illinois
* Correspondence regarding this article, including requests for reprints, should be sent to David J. Weiss, 609 Colonial Circle, Fullerton, CA 92835, United States. Email: dweiss@calstatela.edu.

Abstract

Using a cognitive task (mental calculation) and a perceptual-motor task (stylized golf putting), we examined differential proficiency using the CWS index and several other quantitative measures of performance. The CWS index (Weiss & Shanteau, 2003) is a coherence criterion that looks only at internal properties of the data without incorporating an external standard. In Experiment 1, college students (n = 20) carried out 2- and 3-digit addition and multiplication problems under time pressure. In Experiment 2, experienced golfers (n = 12), also college students, putted toward a target from nine different locations. Within each experiment, we analyzed the same responses using different methods. For the arithmetic tasks, accuracy information (mean absolute deviation from the correct answer, MAD) using a coherence criterion was available; for golf, accuracy information using a correspondence criterion (mean deviation from the target, also MAD) was available. We ranked the performances of the participants according to each measure, then compared the orders using Spearman’s rs. For mental calculation, the CWS order correlated moderately (rs = .46) with that of MAD. However, a different coherence criterion, degree of model fit, did not correlate with either CWS or accuracy. For putting, the ranking generated by CWS correlated .68 with that generated by MAD. Consensual answers were also available for both experiments, and the rankings they generated correlated highly with those of MAD. The coherence vs. correspondence distinction did not map well onto criteria for performance evaluation.
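The two summary statistics the abstract relies on, mean absolute deviation from a criterion value and Spearman's rank correlation between two performance orderings, can be sketched as follows. This is a minimal illustration of the general formulas, not the authors' analysis code; the function names are our own, and the CWS index itself is not reproduced here.

```python
from statistics import mean

def mad(responses, criterion):
    """Mean absolute deviation of a participant's responses from a
    criterion value (the correct answer, or the target location)."""
    return mean(abs(r - criterion) for r in responses)

def rankdata(xs):
    """1-based ranks; tied values share the average of their ranks."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # mean of positions i..j, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    return ranks

def spearman_rs(x, y):
    """Spearman's rs: the Pearson correlation computed on ranks."""
    rx, ry = rankdata(x), rankdata(y)
    mx, my = mean(rx), mean(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den
```

Ranking participants by two indices (say, CWS and MAD) and passing the two score lists to `spearman_rs` yields the rank-order correlations reported above; note that a *negative* correlation between CWS and MAD indicates agreement, since high CWS and low MAD both signal good performance.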

Information

Type
Research Article
Creative Commons
The authors license this article under the terms of the Creative Commons Attribution 3.0 License.
Copyright
Copyright © The Authors 2009. This is an Open Access article, distributed under the terms of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.

Table 1: Numbers used for addition and multiplication problems within each difficulty level


Table 2: Performance for two individual participants across mental calculation tasks, as assessed by six indices


Table 3: Rank order correlations (Spearman’s rs) between six performance measures on mental calculation tasks


Figure 1: CWS vs. MAD for mental calculation data from nineteen students. Each data point represents the appropriate index-specific average over the six conditions. Spearman’s rs = (–).50, p < .05. In order to avoid distorting the graphical impression of the relationship, we omitted the data from an outlier whose average CWS was much higher than anyone else’s. With that twentieth student included, rs = (–).46, p < .05.


Figure 2: CWS vs. MAD for putting data from twelve golfers. Spearman’s rs = (–).676, p < .05.