
A robust visual simultaneous localization and mapping system for dynamic environments without predefined dynamic labels and weighted features

Published online by Cambridge University Press:  17 September 2025

Shuai Xiang
Affiliation:
College of Electric Power, Inner Mongolia University of Technology, Hohhot, 010080, China; Intelligent Energy Technology and Equipment Engineering Research Centre of Colleges, Universities in Inner Mongolia Autonomous Region, Hohhot, 010051, China
Chaoyi Dong*
Affiliation:
College of Electric Power, Inner Mongolia University of Technology, Hohhot, 010080, China; Intelligent Energy Technology and Equipment Engineering Research Centre of Colleges, Universities in Inner Mongolia Autonomous Region, Hohhot, 010051, China; Engineering Research Center of Large Energy Storage Technology, Ministry of Education, Hohhot, 010010, China
Kang Zhang
Affiliation:
College of Electric Power, Inner Mongolia University of Technology, Hohhot, 010080, China; Intelligent Energy Technology and Equipment Engineering Research Centre of Colleges, Universities in Inner Mongolia Autonomous Region, Hohhot, 010051, China
Ge Tai
Affiliation:
College of Electric Power, Inner Mongolia University of Technology, Hohhot, 010080, China; Intelligent Energy Technology and Equipment Engineering Research Centre of Colleges, Universities in Inner Mongolia Autonomous Region, Hohhot, 010051, China
Tianyu Yuan
Affiliation:
College of Electric Power, Inner Mongolia University of Technology, Hohhot, 010080, China; Intelligent Energy Technology and Equipment Engineering Research Centre of Colleges, Universities in Inner Mongolia Autonomous Region, Hohhot, 010051, China
Haoda Yan
Affiliation:
College of Electric Power, Inner Mongolia University of Technology, Hohhot, 010080, China; Intelligent Energy Technology and Equipment Engineering Research Centre of Colleges, Universities in Inner Mongolia Autonomous Region, Hohhot, 010051, China
Xiaoyan Chen
Affiliation:
College of Electric Power, Inner Mongolia University of Technology, Hohhot, 010080, China; Intelligent Energy Technology and Equipment Engineering Research Centre of Colleges, Universities in Inner Mongolia Autonomous Region, Hohhot, 010051, China; Engineering Research Center of Large Energy Storage Technology, Ministry of Education, Hohhot, 010010, China
Corresponding author: Chaoyi Dong; Email: dongchaoyi@imut.edu.cn

Abstract

Visual Simultaneous Localization and Mapping (vSLAM) is fundamentally limited by the static-world assumption, which makes its application in dynamic environments challenging. This paper proposes a robust vSLAM system, RFN-SLAM, which builds on ORB-SLAM3 and handles dynamic scenes without requiring predefined dynamic labels or weighted features. In the feature extraction stage, an enhanced, efficient binary BAD descriptor is used to improve the accuracy of static feature point matching. Through an improved RT-DETR object detection network and the FastSAM instance segmentation network, RFN-SLAM obtains semantic information and uses a novel dynamic-box detection algorithm to identify and eliminate the feature points of dynamic objects. During pose optimization, the static feature points are weighted according to the dynamic information, which significantly reduces mismatches and improves localization accuracy. Meanwhile, neural radiance field rendering is used to remove dynamic objects and render the static scene in 3D. Experiments were conducted on the TUM RGB-D dataset, the Bonn dataset, and a self-collected dataset. The results show that, in terms of localization accuracy, RFN-SLAM significantly outperforms ORB-SLAM3 in dynamic environments. It also achieves more accurate localization than other state-of-the-art dynamic SLAM methods and successfully realizes accurate 3D reconstruction of static scenes. In addition, the real-time performance of RFN-SLAM is effectively maintained without sacrificing accuracy.
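The abstract's core idea of discarding feature points inside detected dynamic boxes and down-weighting nearby static points during pose optimization can be illustrated with a minimal sketch. This is not RFN-SLAM's actual algorithm; the function name, the box representation, and the distance-based exponential weighting scheme (`sigma`) are all illustrative assumptions.

```python
import numpy as np

def filter_and_weight_keypoints(keypoints, dynamic_boxes, sigma=50.0):
    """Drop keypoints inside dynamic boxes; weight the survivors.

    keypoints: (N, 2) array of pixel coordinates.
    dynamic_boxes: list of (x_min, y_min, x_max, y_max) dynamic regions.
    Returns (static_points, weights), where each weight lies in (0, 1]
    and decays toward 0 near a dynamic region, so points close to
    moving objects contribute less to the pose optimization.
    """
    static_points, weights = [], []
    for x, y in keypoints:
        # Reject points that fall inside any detected dynamic box.
        if any(x0 <= x <= x1 and y0 <= y <= y1
               for (x0, y0, x1, y1) in dynamic_boxes):
            continue
        if dynamic_boxes:
            # Distance from the point to the nearest dynamic box
            # (0 if the point touches a box border).
            d = min(np.hypot(max(x0 - x, 0.0, x - x1),
                             max(y0 - y, 0.0, y - y1))
                    for (x0, y0, x1, y1) in dynamic_boxes)
            w = 1.0 - np.exp(-d / sigma)
        else:
            w = 1.0  # no dynamic objects: full confidence
        static_points.append((x, y))
        weights.append(w)
    return np.array(static_points), np.array(weights)

pts, w = filter_and_weight_keypoints(
    np.array([[10.0, 10.0], [100.0, 100.0], [500.0, 500.0]]),
    [(90.0, 90.0, 110.0, 110.0)])
# The point at (100, 100) sits inside the dynamic box and is dropped;
# the farther survivor receives the larger weight.
```

In a real system, these weights would scale the reprojection-error terms of the surviving static points in the bundle-adjustment cost, so residuals near dynamic objects are trusted less.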

Information

Type
Research Article
Copyright
© The Author(s), 2025. Published by Cambridge University Press
