
An approach towards mobile robot recovery due to vision sensor failure in vSLAM systems using ROS

Published online by Cambridge University Press:  08 January 2025

Chibaye Mulubika
Affiliation:
Department of Mechanical and Mechatronic Engineering, Stellenbosch University, Western Cape, South Africa
Kristiaan Schreve*
Affiliation:
Department of Mechanical and Mechatronic Engineering, Stellenbosch University, Western Cape, South Africa
*
Corresponding author: Kristiaan Schreve; Email: kschreve@sun.ac.za

Abstract

This paper proposes a recovery mechanism for low-cost mobile robots whose vision sensor fails during vSLAM. The approach takes advantage of the ROS architecture and applies the Shannon-Nyquist sampling theorem to selectively sample path parameters that are used for back travel in the event of vision sensor failure. Instead of the point clouds normally used to store vSLAM data, this paper proposes storing lightweight variables, namely the distance between sampled points, the velocity combination (linear and angular velocity), the sampling period, and the yaw angle, to describe the robot path and reduce the memory required to store it. The study investigates low-cost robotic systems that typically use cameras aided by proprioceptive sensors, such as an IMU, during vSLAM activities. A demonstration shows how the ROS architecture can be used in a scenario where vision sensing is adversely affected, resulting in mapping failure. The paper further recommends adopting the approach on vSLAM platforms implemented on both ROS1 and ROS2, and proposes adding a layer to vSLAM systems that is used exclusively for back travel when vision is lost during vSLAM activities.
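The path representation described above can be illustrated with a minimal, hedged sketch. The class and method names below (`PathLandmark`, `PathRecorder`, `back_travel_commands`) are hypothetical and do not appear in the paper; the sketch only assumes a differential-drive robot whose sampled landmarks (distance, linear and angular velocity, sampling period, yaw) are replayed newest-first, with the angular velocity negated so the robot retraces its turns during blind back travel:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class PathLandmark:
    """Lightweight variables describing one sampled path segment."""
    distance: float      # metres travelled since the previous sample
    linear_vel: float    # linear velocity at the sample instant (m/s)
    angular_vel: float   # angular velocity at the sample instant (rad/s)
    period: float        # sampling period for this segment (s)
    yaw: float           # heading at the sample instant (rad)

class PathRecorder:
    """Records sparse path landmarks during vision-based travel and
    emits a reversed command sequence for blind back travel.
    Hypothetical illustration, not the authors' implementation."""

    def __init__(self) -> None:
        self.landmarks: List[PathLandmark] = []

    def sample(self, distance: float, linear_vel: float,
               angular_vel: float, period: float, yaw: float) -> None:
        # Store only the lightweight variables, not a point cloud.
        self.landmarks.append(
            PathLandmark(distance, linear_vel, angular_vel, period, yaw))

    def back_travel_commands(self) -> List[Tuple[float, float, float]]:
        # Replay landmarks newest-first; negating the angular velocity
        # mirrors each recorded turn for the return trip. Each tuple is
        # (linear_vel, angular_vel, duration) for a velocity command.
        return [(lm.linear_vel, -lm.angular_vel, lm.period)
                for lm in reversed(self.landmarks)]
```

In a ROS implementation these tuples would be published as velocity commands (e.g. `geometry_msgs/Twist` on `/cmd_vel`) for the stated duration, while the recorded yaw values serve as checkpoints against the IMU reading.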

Information

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press
Figure 1. Illustration of frames captured during vSLAM activities with time stamps, translations, and rotations.

Table I. Brief comparison between ROS1 and ROS2 architectures needed for blind navigation.

Figure 2. An illustration of publisher and subscriber node implementation on ROS.

Figure 3. Path representations for blind navigation.

Figure 4. Path landmarks data collection using publisher-subscriber and ROS services during vSLAM activities.

Figure 5. Architecture for motion control during blind navigation.

Figure 6. Steps to prepare path landmarks for sending to the action server.

Figure 7. Sequence for blind navigation during and after loss of visual capabilities in vSLAM.

Figure 8. VSLAM system with proposed blind navigation layer added.

Figure 9. Definitions for action (A), service (B), and message (C).

Table II. Comparison between traditional navigation method and new approach.

Figure 10. Equipment for experimentation.

Table III. Test results for sampled straight line segments.

Table IV. Position error for back travel using different velocities.

Figure 11. Relationship between angular velocity and yaw angle changes for robot during back travel.

Figure 12. Robot back travel in a straight-line segment without obstacles: (_______) ground truth and (- - -) back travel.

Figure 13. Robot moving in a curved segment with obstacles.

Table V. Test results for a curved segment with obstacles.

Table VI. Test results of a new layer added to existing vSLAM.