Dietary assessment has been used for certification to receive food supplements or other nutrition services and to provide feedback for educational purposes. The proportion of individuals correctly certified as eligible is a function of the amount of error that exists in the dietary measures and the level of dietary intake used to establish eligibility. Whether individuals are correctly counselled to increase or decrease their consumption of selected foods or nutrients is a function of the same factors. It is not clear, however, what percentage of individuals would be correctly classified under a given set of circumstances. The objective of this study is to demonstrate the extent to which measurement error and the choice of eligibility criteria affect the accuracy of classification.
Hypothetical distributions of dietary intake were generated with varying degrees of measurement error. Different eligibility criteria were applied and the expected classification rates were determined using numerical methods.
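The approach described above can be sketched in Python. This is a minimal, illustrative simulation, not the authors' actual procedure: the lognormal intake distribution, its parameters, the additive normal error model, and the error magnitude are all assumptions chosen only to show how sensitivity, specificity, predictive value positive and predictive value negative respond to a cut point.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "true" usual intakes, assumed lognormal
# (parameters are illustrative only).
n = 100_000
true_intake = rng.lognormal(mean=np.log(100), sigma=0.25, size=n)

# Observed intake = true intake plus within-person measurement error;
# error_sd controls the degree of misclassification.
error_sd = 30.0
observed = true_intake + rng.normal(0.0, error_sd, size=n)

# Eligibility is defined on true intake; certification must rely on
# the error-prone observed intake and a chosen cut point.
target = np.quantile(true_intake, 0.50)  # 50th percentile of true intake
cut_point = target                       # single cut point at the target

eligible = true_intake < target          # truly eligible
certified = observed < cut_point         # classified as eligible

tp = np.sum(certified & eligible)        # correctly certified
fp = np.sum(certified & ~eligible)       # certified but not eligible
fn = np.sum(~certified & eligible)       # eligible but missed
tn = np.sum(~certified & ~eligible)      # correctly rejected

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)                     # predictive value positive
npv = tn / (tn + fn)                     # predictive value negative
print(sensitivity, specificity, ppv, npv)
```

Varying `error_sd`, moving `cut_point` below or above `target`, or adding a second cut point reproduces the kinds of trade-offs reported in the results: larger error degrades all four rates, while shifting a single cut point trades sensitivity against predictive value positive.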
Cut points of dietary intake at decreasing levels below the 50th percentile of true intake were associated with lower sensitivity and predictive value positive rates, but higher specificity and predictive value negative rates. The correct classification rates were lower when two cut points of dietary intake were used. Using a single cut point that was higher than the targeted true consumption resulted in higher sensitivity but lower predictive value positive, and lower specificity but higher predictive value negative.
Current methods of dietary assessment may not be reliable enough to attain acceptable levels of correct classification. Policy-makers and educators must consider how much misclassification error they are willing to accept and determine whether more intensive methods are necessary.