
Increased plane identification precision with stereo identification

Published online by Cambridge University Press:  19 June 2023

Junjie Ji
Affiliation:
State Key Laboratory of Tribology, Department of Mechanical Engineering, Tsinghua University, Beijing, 100084, P. R. China
Jing-Shan Zhao*
Affiliation:
State Key Laboratory of Tribology, Department of Mechanical Engineering, Tsinghua University, Beijing, 100084, P. R. China
Corresponding author: Jing-Shan Zhao; Email: jingshanzhao@mail.tsinghua.edu.cn

Abstract

Stereo vision allows machines to perceive their surroundings, and plane identification is a crucial aspect of that perception. Identification accuracy constrains the applicability of stereo systems. Many stereo vision cameras are cost-effective, compact, and user-friendly, and are therefore widely used in engineering applications; however, identification errors limit their effectiveness in quantitative scenarios. Although some calibration methods improve identification accuracy through camera distortion models, they rely on models tailored to a camera's specific structure. This article presents a calibration method that does not depend on any particular distortion model and can correct the plane position and orientation identified by any algorithm, provided that the identification error is biased. A high-precision mechanical calibration platform is designed to acquire accurate calibration data while using the same detected material as in real measurement scenarios. Experimental comparisons confirm the efficacy of the plane pose correction applied to PCL-RANSAC: the average relative error of distance is reduced by a factor of 5.4, and the average absolute error of angle is reduced by 41.2%.
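To make the correction idea concrete, the following is a minimal sketch, not the article's implementation: PCL's RANSAC segmentation identifies a plane, the camera-to-plane distance and tilt angle are computed from the plane coefficients, and a hypothetical linear bias model maps the identified pose to a corrected one. The coefficients kd, bd, ka, and ba are placeholders for values one would fit to calibration data; they are not taken from the article.

```cpp
#include <cmath>
#include <iostream>

#include <pcl/ModelCoefficients.h>
#include <pcl/PointIndices.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/sample_consensus/method_types.h>
#include <pcl/sample_consensus/model_types.h>
#include <pcl/segmentation/sac_segmentation.h>

int main() {
  pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);

  // Synthetic stand-in for a stereo/RGB-D point cloud: a plane at z = 1 m
  // with slight depth noise. A real pipeline would fill `cloud` from a frame.
  for (int i = 0; i < 20; ++i)
    for (int j = 0; j < 20; ++j)
      cloud->push_back(pcl::PointXYZ(0.01f * i, 0.01f * j,
                                     1.0f + 0.001f * ((i + j) % 3)));

  // Identify the dominant plane with PCL's RANSAC segmentation.
  pcl::SACSegmentation<pcl::PointXYZ> seg;
  seg.setOptimizeCoefficients(true);
  seg.setModelType(pcl::SACMODEL_PLANE);
  seg.setMethodType(pcl::SAC_RANSAC);
  seg.setDistanceThreshold(0.01);  // inlier threshold in metres
  seg.setInputCloud(cloud);

  pcl::ModelCoefficients coeffs;  // plane model ax + by + cz + d = 0
  pcl::PointIndices inliers;
  seg.segment(inliers, coeffs);
  if (inliers.indices.empty()) {
    std::cerr << "no plane found\n";
    return 1;
  }

  const double a = coeffs.values[0], b = coeffs.values[1],
               c = coeffs.values[2], d = coeffs.values[3];
  const double norm = std::sqrt(a * a + b * b + c * c);
  const double dist = std::abs(d) / norm;             // camera-to-plane distance
  const double angle = std::acos(std::abs(c) / norm); // tilt w.r.t. the optical (z) axis

  // Hypothetical linear bias model: corrected = k * identified + b0.
  // The coefficient values below are illustrative placeholders, not results
  // from the article; in practice they come from calibration data.
  const double kd = 1.02, bd = -0.004, ka = 0.98, ba = 0.01;
  const double dist_corr = kd * dist + bd;
  const double angle_corr = ka * angle + ba;

  std::cout << "identified: " << dist << " m, " << angle << " rad\n"
            << "corrected:  " << dist_corr << " m, " << angle_corr << " rad\n";
  return 0;
}
```

In practice, such a bias model would be obtained by regressing the poses identified by PCL-RANSAC against the ground-truth poses supplied by the mechanical calibration platform; a linear form is assumed here purely for illustration.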

Type
Research Article
Copyright
© The Author(s), 2023. Published by Cambridge University Press
