
Tangential-force detection ability of three-axis fingernail-color sensor aided by CNN

Published online by Cambridge University Press: 27 March 2023

Keisuke Watanabe
Affiliation:
Graduate School of Informatics, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, Aichi, Japan
Yandong Chen
Affiliation:
Graduate School of Informatics, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, Aichi, Japan
Hiraku Komura
Affiliation:
Faculty of Engineering, Kyushu Institute of Technology, 1-1 Sensui-cho, Tobata-ku, Kitakyushu, Fukuoka, Japan
Masahiro Ohka*
Affiliation:
Graduate School of Informatics, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, Aichi, Japan
*Corresponding author. E-mail: ohka@i.nagoya-u.ac.jp

Abstract

As part of a new tactile recording system, we develop a three-axis fingernail-color sensor that measures the three-dimensional force applied to a fingertip by observing changes in the fingernail's color. Because the color change is complex, the relationship between fingernail images and three-dimensional forces is modeled with convolutional neural network (CNN) models. The success of this approach depends on the amount of input data, since CNN training requires a large data set. To obtain such data efficiently, we developed a novel measuring device, composed of an electronic scale and a load cell, that records fingernail images under tangential forces applied in directions from 0$^\circ$ to 360$^\circ$. In a series of evaluation experiments, we recorded movies of the color changes caused by the three-axis forces and created a data set for the CNN models by converting the movies into still images. Although we produced a generalized CNN model that can evaluate images of any person's fingernails, its root-mean-square error (RMSE) exceeded those of both the whole model and the individual models, with the individual models showing the smallest RMSE. We therefore adopted the individual models, which evaluated the tangential-force direction of the test data in the $F_x$-$F_y$ plane to within about $\pm$2.5$^\circ$ at the peak points of the applied force. Although the fingernail-color sensor achieved roughly the same accuracy as previous sensors in normal-force tests, the present fingernail-color sensor acts as the best tangential sensor because its RMSE in tangential-force tests was around one third of that reported in previous studies.
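
The pipeline summarized above (movie frames in, three force components out) can be illustrated with a minimal sketch. The code below is not the authors' implementation: it assumes a Keras-style CNN regressor and OpenCV frame extraction, and the image size, network depth, and preprocessing choices are illustrative placeholders only.

```python
# Minimal sketch (not the authors' code): convert recorded fingernail movies to
# still images and regress the three force components (Fx, Fy, Fz) with a small CNN.
import cv2
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (64, 64)  # assumed input resolution

def frames_from_movie(path, step=1):
    """Transform a movie into still images (BGR -> RGB, resized, scaled to [0, 1])."""
    cap = cv2.VideoCapture(path)
    frames, i = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % step == 0:
            frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            frames.append(cv2.resize(frame, IMG_SIZE))
        i += 1
    cap.release()
    return np.asarray(frames, dtype=np.float32) / 255.0

def build_model():
    """Small CNN with a 3-unit linear output for (Fx, Fy, Fz) regression."""
    return models.Sequential([
        layers.Input(shape=(*IMG_SIZE, 3)),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dropout(0.5),   # dropout against overfitting
        layers.Dense(64, activation="relu"),
        layers.Dense(3),       # Fx, Fy, Fz
    ])

model = build_model()
# RMSE is the evaluation metric used in the paper; Adam is a common optimizer choice.
model.compile(optimizer="adam", loss="mse",
              metrics=[tf.keras.metrics.RootMeanSquaredError()])
```

In this sketch, fitting one such model per participant would correspond to the "individual" models adopted in the paper, whereas pooling all participants' frames into a single training set would correspond to the generalized model.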

Type
Research Article
Copyright
© The Author(s), 2023. Published by Cambridge University Press

