
A novel online time calibration framework via double-stage EKF for visual-inertial odometry

Published online by Cambridge University Press:  28 October 2024

Shuaiyong Li*
Affiliation:
School of Automation, Chongqing University of Posts and Telecommunications, Chongqing, China
Jiawei Nie
Affiliation:
School of Automation, Chongqing University of Posts and Telecommunications, Chongqing, China
Chengchun Guo
Affiliation:
School of Automation, Chongqing University of Posts and Telecommunications, Chongqing, China
Yang Yang
Affiliation:
School of Automation, Chongqing University of Posts and Telecommunications, Chongqing, China
Lin Mei
Affiliation:
Chongqing Special Equipment Inspection and Research Institute, Chongqing, China
*Corresponding author: Shuaiyong Li; Email: lishuaiyong@cqupt.edu.cn

Abstract

There is an unavoidable time offset between the camera stream and the inertial measurement unit (IMU) data due to sensor triggering and transmission delays, and this offset seriously degrades the accuracy of visual-inertial odometry (VIO). This paper proposes a novel online time calibration framework for VIO based on a double-stage extended Kalman filter (EKF). First, a first-stage complementary Kalman filter is constructed by exploiting the complementary characteristics of the accelerometer and the gyroscope in the IMU: the rotation predicted by the gyroscope is corrected with the accelerometer measurement, so the IMU outputs a more accurate initial pose. Second, the unknown time offset is added to the state vector of the VIO system. The estimated IMU pose serves as the prediction, and the reprojection errors of the same feature point observed in multiple camera frames serve as the measurement constraints. While the VIO system runs, the time offset is continuously estimated and added to the IMU timestamps, yielding synchronized IMU and camera data. The Schur complement is used to marginalize camera states that carry less information, which avoids losing prior information between images and improves the accuracy of camera pose estimation. Finally, the effectiveness of the proposed algorithm is verified on the EuRoC dataset and real experimental data.
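To make the pipeline concrete, the sketch below illustrates two of the mechanisms the abstract describes: a gyroscope-predicted attitude corrected by the accelerometer (the role of the first-stage filter) and Schur-complement marginalization of old states. This is a minimal sketch under stated assumptions, not the authors' implementation; the function names, the fixed blending gain `alpha`, and the state layout are illustrative choices.

```python
import numpy as np

# Illustrative sketch only: names, the gain alpha, and the state layout
# are assumptions, not the paper's implementation.

def complementary_update(roll, pitch, gyro, accel, dt, alpha=0.98):
    """First stage: gyro-predicted attitude corrected by the accelerometer.

    gyro: angular rates (rad/s); accel: specific force (m/s^2).
    """
    # Predict roll/pitch by integrating the gyroscope rates over dt.
    roll_pred = roll + gyro[0] * dt
    pitch_pred = pitch + gyro[1] * dt
    # Observe the gravity direction with the accelerometer (reliable only
    # when external acceleration is small).
    roll_meas = np.arctan2(accel[1], accel[2])
    pitch_meas = np.arctan2(-accel[0], np.hypot(accel[1], accel[2]))
    # Blend: the gyro dominates at high frequency, the accelerometer
    # removes the low-frequency drift.
    roll = alpha * roll_pred + (1.0 - alpha) * roll_meas
    pitch = alpha * pitch_pred + (1.0 - alpha) * pitch_meas
    return roll, pitch

def marginalize_schur(H, b, m):
    """Marginalize the last m states of the information system H x = b.

    Returns the prior (H', b') on the remaining states, so older camera
    states can be removed without discarding their constraints.
    """
    n = H.shape[0] - m
    Hrr, Hrm = H[:n, :n], H[:n, n:]
    Hmr, Hmm = H[n:, :n], H[n:, n:]
    Hmm_inv = np.linalg.inv(Hmm)
    H_prior = Hrr - Hrm @ Hmm_inv @ Hmr
    b_prior = b[:n] - Hrm @ Hmm_inv @ b[n:]
    return H_prior, b_prior

# Second stage (conceptually): the EKF state also carries the camera-IMU
# time offset td; each new estimate is added to the raw IMU timestamps,
#   t_imu_aligned = t_imu_raw + td_estimate,
# so IMU and camera measurements refer to a common clock.
```

Note that the paper performs the accelerometer correction inside a complementary Kalman filter rather than with a fixed gain; the constant `alpha` above is used only to keep the sketch self-contained.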

Information

Type: Research Article
Copyright: © The Author(s), 2024. Published by Cambridge University Press

