Published online by Cambridge University Press: 11 August 2025
In this study, we introduce a real-time pose estimation method for a class of mobile robots with rectangular bodies (e.g., the common automated guided vehicles), which integrates odometry and RGB-D images. First, a lightweight object detection model is designed based on the visual information. Then, a pose estimation algorithm is proposed based on the depth-value variations within the target region, which exhibit specific patterns arising from the robot’s three-dimensional geometry and the observation perspective (termed “differentiated depth information”). To improve the robustness of object detection and pose estimation, a Kalman filter is further constructed by incorporating odometry data. Finally, a series of simulations and experiments are conducted to demonstrate the method’s effectiveness. Experiments show that the proposed algorithm achieves a speed of over 20 frames per second (FPS) together with good estimation accuracy on a mobile robot equipped with an NVIDIA Jetson Nano Developer Kit.
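The abstract does not give the filter's exact formulation; as a rough illustration of how an odometry-driven prediction might be fused with a vision-based pose measurement, the following Python sketch implements a minimal planar (extended) Kalman filter. The state [x, y, theta], the noise matrices Q and R, and the odometry increments dx, dy, dtheta are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumed, not the paper's actual filter): fuse odometry
# increments with an RGB-D pose measurement for a planar pose [x, y, theta].
import numpy as np

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return (a + np.pi) % (2 * np.pi) - np.pi

class PoseKF:
    def __init__(self, x0, P0, Q, R):
        self.x = np.asarray(x0, dtype=float)   # state: [x, y, theta]
        self.P = np.asarray(P0, dtype=float)   # state covariance
        self.Q = Q                             # odometry (process) noise
        self.R = R                             # vision (measurement) noise

    def predict(self, dx, dy, dtheta):
        # Propagate the pose with odometry increments given in the robot frame.
        c, s = np.cos(self.x[2]), np.sin(self.x[2])
        self.x += np.array([c * dx - s * dy, s * dx + c * dy, dtheta])
        self.x[2] = wrap(self.x[2])
        F = np.array([[1.0, 0.0, -s * dx - c * dy],
                      [0.0, 1.0,  c * dx - s * dy],
                      [0.0, 0.0,  1.0]])       # Jacobian of the motion model
        self.P = F @ self.P @ F.T + self.Q

    def update(self, z):
        # Correct with the vision-based pose measurement z = [x, y, theta].
        y = np.asarray(z, dtype=float) - self.x
        y[2] = wrap(y[2])                      # innovation with wrapped angle
        S = self.P + self.R                    # H = I, so S = P + R
        K = self.P @ np.linalg.inv(S)          # Kalman gain
        self.x = self.x + K @ y
        self.x[2] = wrap(self.x[2])
        self.P = (np.eye(3) - K) @ self.P
```

In such a scheme, the odometry-based prediction keeps the pose estimate available when a detection is missed, while the vision update limits odometric drift; the same idea could equally be realized with a different state or measurement model.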