
Using pre- and post-survey instruments in interventions: determining the random response benchmark and its implications for measuring effectiveness

Published online by Cambridge University Press:  21 December 2017

George C Davis*
Affiliation:
Department of Human Nutrition, Foods, and Exercise, Virginia Tech University, Blacksburg, VA, USA Department of Agricultural and Applied Economics, Virginia Tech University, 214 Hutcheson Hall, Blacksburg, VA 24061, USA
Ranju Baral
Affiliation:
Global Health Group, University of California San Francisco, Global Health Sciences, San Francisco, CA, USA
Thomas Strayer
Affiliation:
Translational Biology, Medicine, and Health Program, Virginia Tech University, Roanoke, VA, USA
Elena L Serrano
Affiliation:
Department of Human Nutrition, Foods, and Exercise, Virginia Tech University, Blacksburg, VA, USA
* Corresponding author: Email georgedavis@vt.edu

Abstract

Objective

The present communication demonstrates that even if individuals are answering a pre/post survey at random, the percentage of individuals showing improvement from the pre- to the post-survey can be surprisingly high. Some simple formulas and tables are presented that will allow analysts to quickly determine the expected percentage of individuals showing improvement if participants just answered the survey at random. This benchmark percentage, in turn, defines the appropriate null hypothesis for testing if the actual percentage observed is greater than the expected random answering percentage.
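The benchmark described above can be derived directly for the simplest case. On a k-point scale with independent, uniformly random pre- and post-answers, there are k² equally likely (pre, post) pairs, of which k(k−1)/2 show improvement, giving a probability of (k−1)/(2k). The sketch below (an illustration consistent with the Table 1 setup, not the authors' code) computes this for a single question:

```python
from fractions import Fraction

def random_improvement_prob(k: int) -> Fraction:
    """Probability that an independent, uniformly random post-answer
    exceeds the pre-answer on a k-point scale: C(k,2)/k^2 = (k-1)/(2k)."""
    improving_pairs = k * (k - 1) // 2  # (pre, post) pairs with post > pre
    return Fraction(improving_pairs, k * k)

# On a five-point scale, 10 of the 25 equally likely (pre, post)
# pairs show improvement, so the random-answering benchmark is 2/5.
print(random_improvement_prob(5))
```

For the five-point scale of Table 1 this yields 40 %: a surprisingly high share of respondents would appear to "improve" even with purely random answering.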

Design

The analysis is demonstrated by testing if actual improvement in a component of the US Department of Agriculture’s (USDA) Expanded Food and Nutrition Education Program is significantly different from random answering improvement.

Setting

USA.

Subjects

From 2011 to 2014, 364 320 adults completed a standardized pre- and post-survey administered by the USDA.

Results

For each year, the null hypothesis that the actual number of improvements is no greater than the number expected under random answering cannot be rejected. This does not mean that the pre-/post-test survey instrument is flawed, only that the data are being inappropriately evaluated.
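A test of this kind can be sketched as a one-sided comparison of the observed improvement proportion against the random-answering benchmark p0. The function and the sample counts below are hypothetical illustrations (the paper's exact statistic and EFNEP counts are not reproduced here); a normal approximation is used, which is reasonable at these sample sizes:

```python
import math

def one_sided_z(n_improved: int, n_total: int, p0: float) -> float:
    """z statistic for H0: true improvement proportion <= p0
    versus H1: proportion > p0, using the normal approximation
    with the null standard error sqrt(p0*(1-p0)/n)."""
    p_hat = n_improved / n_total
    se = math.sqrt(p0 * (1 - p0) / n_total)
    return (p_hat - p0) / se

# Hypothetical counts: 14 800 of 36 000 respondents show improvement,
# tested against a 40 % random-answering benchmark.
z = one_sided_z(14_800, 36_000, 0.40)
```

Only when the observed proportion significantly exceeds the benchmark (large positive z) can improvement be attributed to something other than random answering.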

Conclusions

Knowing the percentage of individuals showing improvement on a pre/post survey instrument when questions are randomly answered is an important benchmark number to determine in order to draw valid inferences about nutrition interventions. The results presented here should help analysts in determining this benchmark number for some common survey structures and avoid drawing faulty inferences about the effectiveness of an intervention.

Information

Type
Short Communication
Copyright
Copyright © The Authors 2017 

Table 1 All possible answer combinations for a pre- and post-survey question with a five-point scale. The shaded area shows improvement events


Table 2 Probabilities of random answering for various survey structures and improvement criteria


Table 3 US Department of Agriculture Expanded Food and Nutrition Education Program food resource management practices: expected v. actual improvement proportions and test results, 2010–2014