
Using item response theory to address vulnerabilities in FFQ

Published online by Cambridge University Press:  13 September 2017

Josh B. Kazman*
Affiliation:
Department of Military and Emergency Medicine, Uniformed Services University of the Health Sciences, Bethesda, MD 20814-4712, USA
Jonathan M. Scott
Affiliation:
Department of Military and Emergency Medicine, Uniformed Services University of the Health Sciences, Bethesda, MD 20814-4712, USA
Patricia A. Deuster
Affiliation:
Department of Military and Emergency Medicine, Uniformed Services University of the Health Sciences, Bethesda, MD 20814-4712, USA
* Corresponding author: J. B. Kazman, fax +1 301 295 6773, email Josh.kazman.ctr@usuhs.edu

Abstract

The limitations of self-reported dietary intake are widely recognised as a major vulnerability of FFQ and of the dietary screeners/scales derived from them. Such instruments can yield inconsistent results, producing questionable interpretations. The present article discusses the value of psychometric approaches and standards in addressing these drawbacks for instruments used to estimate dietary habits and nutrient intake. We argue that a FFQ or screener that treats diet as a ‘latent construct’ can be optimised for both internal consistency and the value of the research results. Latent constructs, the foundation of item response theory (IRT)-based scales (e.g. the Patient Reported Outcomes Measurement Information System), are typically introduced at the design stage of an instrument to capture critical factors that cannot be observed or measured directly. We propose an iterative approach that uses such modelling to refine FFQ and similar instruments. To that end, we illustrate the benefits of psychometric modelling using items and data from a sample of 12 370 Soldiers who completed the 2012 US Army Global Assessment Tool (GAT). We used factor analysis to build a scale incorporating five of the eleven survey items. An IRT-driven assessment of response category properties indicated likely problems in the ordering or wording of several response categories. Group comparisons, examined with differential item functioning (DIF), provided evidence of scale validity across Army sub-populations (sex, service component and officer status). Such an approach holds promise for future FFQ.
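As background for the IRT machinery described above, the sketch below computes category response probabilities under Samejima's Graded Response Model, the ordered-category model referenced in the figures. The discrimination `a` and threshold values are illustrative, not estimates from the GAT data:

```python
import numpy as np

def grm_category_probs(theta, a, thresholds):
    """Category response probabilities under the Graded Response Model.

    theta: latent trait value (here, healthy eating)
    a: item discrimination
    thresholds: ordered category boundaries b_1 < ... < b_{K-1}
    Returns an array of K probabilities that sums to 1.
    """
    b = np.asarray(thresholds, dtype=float)
    # Cumulative probability of responding in category k or higher
    p_star = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    # Bracket with P(>=lowest)=1 and P(>highest)=0, then difference
    cum = np.concatenate(([1.0], p_star, [0.0]))
    return cum[:-1] - cum[1:]

# Hypothetical six-category item (as in the whole-grains item of Fig. 2):
# discrimination 1.5, five equally spaced boundaries
probs = grm_category_probs(theta=0.0, a=1.5, thresholds=[-2, -1, 0, 1, 2])
```

Plotting `probs` across a grid of θ values yields item characteristic curves of the kind shown in Fig. 2(b).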

Information

Type
Full Papers
Copyright
Copyright © The Authors 2017 

Fig. 1 Conceptual model for most psychometric scales (model A) and for FFQ (model B).


Fig. 2 Item characteristic curves for item 3, whole grains, with its six response categories (0=‘rarely or never’; 1=‘1 or 2 times/week’; 2=‘3–6 times/week’; 3=‘1 time/d’; 4=‘2 times/d’; 5=‘3 or more times/d’), under the nominal model (a, which does not impose a response category order) and the Graded Response Model (b, which does impose a response category order).
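The distinction drawn in this caption can be sketched as follows: the nominal model estimates a separate slope per response category and therefore does not assume the categories are ordered, so an empirically estimated ordering of the slopes can reveal whether categories behave as intended. A minimal softmax implementation (slope and intercept values are illustrative, not published estimates):

```python
import numpy as np

def nominal_probs(theta, slopes, intercepts):
    """Category probabilities under Bock's nominal response model:
    P_k(theta) is proportional to exp(a_k * theta + c_k).
    No category order is imposed; each category gets its own slope a_k.
    """
    z = np.asarray(slopes, dtype=float) * theta + np.asarray(intercepts, dtype=float)
    ez = np.exp(z - z.max())  # numerically stabilised softmax
    return ez / ez.sum()

# Six categories with monotonically increasing slopes behave like an
# ordered item; a non-monotone slope pattern would signal a problem
# with category ordering or wording.
p = nominal_probs(0.0, slopes=[0, 1, 2, 3, 4, 5], intercepts=[0] * 6)
```

If the estimated slopes come out monotone in the intended category order, the simpler Graded Response Model (panel b) is defensible; if not, the disordered categories are candidates for rewording or collapsing.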


Table 1 Item parameters (item parameter estimates with their standard errors)


Fig. 3 Test information as a function of healthy eating (θ) for the entire scale (left), the fruit item (middle) and the water item (right).
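The quantity plotted in Fig. 3 can be sketched numerically: under the Graded Response Model, an item's Fisher information at θ is the sum over categories of P′ₖ(θ)²/Pₖ(θ), and the test information is the sum of the item informations. The sketch below approximates the derivative by central differences; all parameter values are illustrative, not the published estimates:

```python
import numpy as np

def grm_probs(theta, a, thresholds):
    """Graded Response Model category probabilities (as in the earlier sketch)."""
    b = np.asarray(thresholds, dtype=float)
    p_star = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    cum = np.concatenate(([1.0], p_star, [0.0]))
    return cum[:-1] - cum[1:]

def item_information(theta, a, thresholds, eps=1e-5):
    """Fisher information of one GRM item at theta:
    I(theta) = sum_k P_k'(theta)^2 / P_k(theta),
    with P_k' taken by central differences."""
    p = grm_probs(theta, a, thresholds)
    dp = (grm_probs(theta + eps, a, thresholds)
          - grm_probs(theta - eps, a, thresholds)) / (2 * eps)
    return float(np.sum(dp ** 2 / p))

# Hypothetical five-item scale: test information is the sum over items
items = [(1.5, [-2, -1, 0, 1, 2])] * 5
test_info = sum(item_information(0.0, a, b) for a, b in items)
```

Evaluating `test_info` across a grid of θ values produces a curve like the left panel of Fig. 3; items with flat information curves (such as the water item in the right panel) contribute little to measurement precision.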

Supplementary material: Kazman et al. supplementary material (Appendix), file, 18.2 KB