Passive, wearable sensors can be used to obtain objective information on infant feeding, but their use for this purpose has not been tested. Our objective was to compare assessment of infant feeding (frequency, duration and cues) by maternal self-report with that by the Automatic Ingestion Monitor-2 (AIM-2).
Design:
A cross-sectional pilot study was conducted in Ghana. Mothers wore the AIM-2 on eyeglasses for 1 d during waking hours to assess infant feeding using images automatically captured by the device every 15 s. Feasibility was assessed using compliance with wearing the device. Infant feeding practices recorded in the AIM-2 images were annotated by a trained evaluator and compared with maternal self-report collected via interviewer-administered questionnaire.
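Compliance here reduces to the fraction of reported wake-time covered by device wear, which can be tallied from the image timestamps. A minimal sketch of that bookkeeping, assuming timestamped images and a gap threshold for detecting device-off periods (both the threshold and the function names are illustrative, not part of the AIM-2 software):

```python
from datetime import datetime

CAPTURE_INTERVAL_S = 15  # the AIM-2 captures one image every 15 s

def wear_time_s(timestamps, max_gap_s=60):
    """Seconds of device wear, summed over consecutive image
    timestamps; gaps longer than max_gap_s (an illustrative
    threshold) are treated as device-off periods."""
    ts = sorted(timestamps)
    total = CAPTURE_INTERVAL_S  # count the final frame
    for prev, cur in zip(ts, ts[1:]):
        gap = (cur - prev).total_seconds()
        total += gap if gap <= max_gap_s else 0.0
    return total

def compliance(timestamps, wake_start, wake_end):
    """Fraction of reported wake-time during which the device was worn."""
    return wear_time_s(timestamps) / (wake_end - wake_start).total_seconds()

# e.g. compliance(image_times, datetime(2023, 5, 1, 6, 0), datetime(2023, 5, 1, 21, 0))
```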
Setting:
Rural and urban communities in Ghana.
Participants:
Participants were thirty-eight breast-feeding mothers (eighteen rural and twenty urban) of infants aged ≤7 months.
Results:
Twenty-five mothers reported exclusive breast-feeding, which was common among those < 30 years of age (n 15, 60 %) and those residing in urban communities (n 14, 70 %). Compliance with wearing the AIM-2 was high (83 % of wake-time), suggesting low user burden. Maternal report differed from the AIM-2 data, such that mothers reported higher mean breast-feeding frequency (eleven v. eight times, P = 0·041) and duration (18·5 v. 10 min, P = 0·007) during waking hours.
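The abstract does not name the statistical test behind these P values; a paired, per-mother comparison such as a paired t-test (or its non-parametric counterpart, the Wilcoxon signed-rank test) is a common choice. A sketch with hypothetical data:

```python
import numpy as np
from scipy import stats

# Hypothetical per-mother breast-feeding frequencies (not the study's data)
reported = np.array([12, 10, 11, 13, 9, 12, 11, 10])   # maternal self-report
observed = np.array([8, 7, 9, 10, 7, 8, 9, 8])         # AIM-2 annotation

t_stat, p_value = stats.ttest_rel(reported, observed)      # paired t-test
w_stat, p_wilcox = stats.wilcoxon(reported, observed)      # non-parametric alternative
print(f'reported mean {reported.mean():.1f} v. observed {observed.mean():.1f}, P = {p_value:.3f}')
```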
Conclusion:
The AIM-2 was a feasible tool for the assessment of infant feeding among mothers in Ghana as a passive, objective method and identified overestimation of self-reported breast-feeding frequency and duration. Future studies using the AIM-2 are warranted to determine validity on a larger scale.
Accurate measurements of food volume and density are often required as ‘gold standards’ for calibration of image-based dietary assessment and for food database development. Currently, there is no specialised laboratory instrument for these measurements. We present the design of a new volume and density (VD) meter to bridge this technological gap.
Design:
Our design consists of a turntable, a load sensor, a set of cameras and lights installed on an arc-shaped stationary support, and a microcomputer. It acquires an array of food images, reconstructs a 3D volumetric model, weighs the food and calculates both food volume and density, all in an automatic process controlled by the microcomputer. To adapt to the complex shapes of foods, a new food surface model, derived from the electric field of charged particles, is developed for 3D point cloud reconstruction of either convex or concave food surfaces.
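Once the 3D model is reconstructed, the density computation itself is simple: the load-sensor mass divided by the mesh volume. A minimal sketch, assuming the reconstruction step has already produced a closed triangle mesh; the signed-tetrahedron formula below is a standard way to compute the volume of such a mesh and stands in for, rather than reproduces, the authors' electric field-based pipeline:

```python
import numpy as np

def mesh_volume(vertices, faces):
    """Volume of a closed triangle mesh via signed tetrahedra:
    V = |sum over faces of dot(v0, cross(v1, v2)) / 6|."""
    v = np.asarray(vertices, dtype=float)
    tris = v[np.asarray(faces)]                 # (n_faces, 3, 3)
    signed = np.einsum('ij,ij->i',
                       tris[:, 0],
                       np.cross(tris[:, 1], tris[:, 2])) / 6.0
    return abs(signed.sum())

def food_density(mass_g, vertices, faces):
    """Density (g/cm^3) from load-sensor mass (g) and mesh volume (cm^3)."""
    return mass_g / mesh_volume(vertices, faces)
```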
Results:
We conducted two experiments to evaluate the VD meter. The first utilised computer-synthesised 3D objects with prescribed convex and concave surfaces of known volume to investigate different food surface types. The second was based on actual foods of different shapes, colours and textures. Our results indicated that, for the synthesised objects, the measurement error of the electric field-based method was <1 %, significantly lower than that of traditional methods. For real-world foods, the measurement error depended on the type of food volume measured (a detailed discussion is included); the largest error was approximately 5 %.
Conclusion:
The VD meter provides a new electronic instrument to support advanced research in nutrition science.
Current approaches to food volume estimation require the user to carry a fiducial marker (e.g. a checkerboard card) and place it next to the food before taking a picture. This procedure is inconvenient, and post-processing of the food picture is time-consuming and sometimes inaccurate. These problems discourage people from using the smartphone for self-administered dietary assessment. The present bioengineering study presents a novel smartphone-based imaging approach to table-side estimation of food volume that overcomes these limitations.
Design
We present a new method for food volume estimation without a fiducial marker. Our mathematical model indicates that, using a special picture-taking strategy, the smartphone-based imaging system can be calibrated adequately if the physical length of the smartphone and the output of the motion sensor within the device are known. We also present and test a new virtual reality method for food volume estimation using the International Food Unit™ and a training process for error control.
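The calibration model itself is not reproduced in the abstract, but one ingredient it names, the motion-sensor output, is routinely used to recover the camera's tilt relative to gravity. A hedged sketch of that single step only (axis conventions follow the common phone layout with z pointing out of the screen; the full marker-free scale recovery also uses the phone's physical length and the authors' picture-taking strategy, which are not detailed here):

```python
import math

def camera_pitch_deg(ax, ay, az):
    """Pitch of the camera's optical axis relative to the horizontal
    plane, from a static accelerometer reading (m/s^2). Assumes the
    phone is held still, so the measured vector is gravity alone, and
    that the device z-axis points out of the screen (a common
    convention; other devices may differ)."""
    return math.degrees(math.atan2(az, math.hypot(ax, ay)))
```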
Results
Our pilot study, with sixty-nine participants and fifteen foods, indicates that the fiducial-marker-free approach is valid and that the training improves estimation accuracy significantly (P<0·05) for all but one food (egg, P>0·05).
Conclusions
Elimination of the fiducial marker and application of virtual reality, the International Food Unit™ and an automated training process allowed quick food volume estimation and control of the estimation error. The estimated volume could be used to search a nutrient database and determine energy and nutrients in the diet.
To develop an artificial intelligence (AI)-based algorithm that can automatically detect food items in images acquired by an egocentric wearable camera for dietary assessment.
Design
To study human diet and lifestyle, large sets of egocentric images were acquired with a wearable device, called eButton, from free-living individuals. Three thousand nine hundred images containing real-world activities, forming eButton data set 1, were manually selected from thirty subjects. eButton data set 2 contained 29 515 images acquired from one research participant during a week-long unrestricted recording. They included both food- and non-food-related real-life activities, such as dining at home and in restaurants, cooking, shopping, gardening, housekeeping chores, taking classes and gym exercise. All images in these data sets were classified as food or non-food images based on tags generated by a convolutional neural network (CNN).
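The paper's network architecture and tag vocabulary are not specified in this abstract; as a stand-in, the sketch below tags each image with an off-the-shelf ImageNet-pretrained CNN and calls the image food-related if any top tag matches a keyword list. Both the model choice and the keyword list are illustrative assumptions:

```python
import torch
from torchvision import models
from PIL import Image

# Stand-in for the paper's CNN: an ImageNet-pretrained ResNet-50.
weights = models.ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()
labels = weights.meta['categories']

# Illustrative keyword list; the paper's tag-to-food mapping is not given.
FOOD_WORDS = {'pizza', 'burger', 'banana', 'plate', 'soup', 'coffee'}

def is_food_image(path, top_k=5):
    """Tag the image with the CNN, then flag it as a food image if
    any of the top-k tags contains a food-related keyword."""
    x = preprocess(Image.open(path).convert('RGB')).unsqueeze(0)
    with torch.no_grad():
        probs = model(x).softmax(dim=1)[0]
    tags = {labels[int(i)].lower() for i in probs.topk(top_k).indices}
    return any(word in tag for tag in tags for word in FOOD_WORDS)
```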
Results
A cross test was conducted on eButton data set 1, training on one half of the data set and testing on the other, then swapping the halves; the overall accuracy of food detection was 91·5 % and 86·4 % for the two splits, respectively. For eButton data set 2, 74·0 % sensitivity and 87·0 % specificity were obtained when both ‘food’ and ‘drink’ tags were counted as food images; when only ‘food’ items were counted, the sensitivity and specificity reached 85·0 % and 85·8 %, respectively.
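For reference, the accuracy, sensitivity and specificity quoted above follow the usual confusion-matrix definitions; a minimal sketch (the counts are hypothetical placeholders, not the study's):

```python
def detection_metrics(tp, fp, tn, fn):
    """Confusion-matrix summaries for food/non-food detection."""
    return {
        'accuracy':    (tp + tn) / (tp + fp + tn + fn),
        'sensitivity': tp / (tp + fn),  # food images correctly flagged
        'specificity': tn / (tn + fp),  # non-food images correctly rejected
    }

# e.g. detection_metrics(tp=740, fp=130, tn=870, fn=260)
```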
Conclusions
The AI technology can automatically detect foods from low-quality, wearable camera-acquired real-world egocentric images with reasonable accuracy, reducing both the burden of data processing and privacy concerns.
The eButton takes frontal images at 4 s intervals throughout the day. A manually administered, three-dimensional wire mesh procedure has been developed to quantify portion sizes from the two-dimensional images. The present paper reports a test of the inter-rater reliability and validity of the wire mesh procedure.
Design
Seventeen foods of diverse shapes and sizes served on plates, bowls and cups were selected to rigorously test the portion assessment procedure. A dietitian not involved in inter-rater reliability assessment used standard cups to independently measure the quantities of foods to generate the ‘true’ value for a total of seventy-five ‘served’ and seventy-five smaller ‘left’ images with diverse portion sizes.
Setting
The images were displayed on a computer, on which the digital wire meshes were applied.
Subjects
Two dietitians and three engineers independently estimated portion size of the larger (‘served’) and smaller (‘left’) images for the same foods.
Results
The engineers had higher reliability and validity than the dietitians. The dietitians had lower reliability and validity for the smaller, more irregular images, but the engineers did not, suggesting that training could overcome this limitation. The lower reliability and validity for foods served in bowls, compared with plates, suggest difficulties with the curved shape of the bowls.
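The abstract does not state which reliability coefficient was used; for continuous portion-size estimates, an intraclass correlation is a common choice. A sketch of ICC(2,1) (two-way random effects, absolute agreement, single rater), offered as one plausible reading rather than the authors' exact analysis:

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1) for an (n_items, k_raters) array of estimates
    (Shrout & Fleiss two-way random effects, absolute agreement)."""
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    ms_rows = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # items
    ms_cols = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # raters
    resid = x - x.mean(axis=1, keepdims=True) - x.mean(axis=0) + grand
    ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)
```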
Conclusions
The wire mesh procedure is an important step forward in quantifying portion size, which has been subject to substantial self-report error. Improved training procedures are needed to overcome the identified problems.
Accurate estimation of food portion size is of paramount importance in dietary studies. We have developed a small, chest-worn electronic device, called eButton, which automatically takes pictures of consumed foods for objective dietary assessment. From the acquired pictures, food portion size can be calculated semi-automatically with the help of computer software. The aim of the present study was to evaluate the accuracy of food portion sizes (volumes) calculated from eButton pictures.
Design
Participants wore an eButton during their lunch. The volume of food in each eButton picture was calculated using software. For comparison, three raters estimated the food volume by viewing the same picture. The actual volume was determined by physical measurement using seed displacement.
Setting
Dining room and offices in a research laboratory.
Subjects
Seven volunteers from our research laboratory.
Results
Images of 100 food samples (fifty Western and fifty Asian foods) were collected and each food volume was estimated from these images using software. The mean relative error between the estimated volume and the actual volume over all the samples was −2·8 % (95 % CI −6·8 %, 1·2 %) with sd of 20·4 %. For eighty-five samples, the food volumes determined by computer differed by no more than 30 % from the results of actual physical measurements. When the volume estimates by the computer and raters were compared, the computer estimates showed much less bias and variability.
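The abstract does not say how the confidence interval was computed; a normal approximation over the per-sample relative errors is the usual choice and reproduces this kind of summary. A sketch:

```python
import numpy as np

def mean_relative_error(estimated, actual, z=1.96):
    """Mean relative error (%) with a normal-approximation 95 % CI
    and the sample SD of the per-item relative errors."""
    est = np.asarray(estimated, dtype=float)
    act = np.asarray(actual, dtype=float)
    rel = 100.0 * (est - act) / act
    mean, sd = rel.mean(), rel.std(ddof=1)
    half = z * sd / np.sqrt(rel.size)
    return mean, (mean - half, mean + half), sd
```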
Conclusions
From the same eButton pictures, the computer-based method provides more objective and accurate estimates of food volume than the visual estimation method.