
A real-time LiDAR-based terrain traversability classification approach in off-road environments

Published online by Cambridge University Press: 30 September 2025

Siyao Wu
Affiliation:
Institute of Robotics and Automatic Information System, College of Artificial Intelligence, Nankai University, Tianjin, China; Tianjin Key Laboratory of Intelligent Robotics, Nankai University, Tianjin, China
Shiyong Zhang*
Affiliation:
Institute of Robotics and Automatic Information System, College of Artificial Intelligence, Nankai University, Tianjin, China; Tianjin Key Laboratory of Intelligent Robotics, Nankai University, Tianjin, China
Runhua Wang
Affiliation:
Institute of Robotics and Automatic Information System, College of Artificial Intelligence, Nankai University, Tianjin, China; Tianjin Key Laboratory of Intelligent Robotics, Nankai University, Tianjin, China
Xijun Zhao
Affiliation:
China North Artificial Intelligence and Innovation Research Institute, Beijing, China; Collective Intelligence and Collaboration Laboratory (CIC), Beijing, China
Zhenshuo Liang
Affiliation:
China North Artificial Intelligence and Innovation Research Institute, Beijing, China; Collective Intelligence and Collaboration Laboratory (CIC), Beijing, China
Xuebo Zhang
Affiliation:
Institute of Robotics and Automatic Information System, College of Artificial Intelligence, Nankai University, Tianjin, China; Tianjin Key Laboratory of Intelligent Robotics, Nankai University, Tianjin, China
Corresponding author: Shiyong Zhang; Email: zhangshiyong@nankai.edu.cn

Abstract

Terrain traversability analysis is essential for autonomous navigation. This paper proposes a real-time light detection and ranging (LiDAR)-based network for terrain traversability classification in off-road environments. The network incorporates a fast BEV (Bird's Eye View) feature map generation module, which performs dynamic voxelization, pillar feature encoding, and feature scattering on the point cloud, and a traversability completion module that generates accurate and dense BEV traversability maps. The network is trained with dense ground-truth labels generated through offline data processing, enabling accurate and dense traversability classification of the terrain surrounding the ego vehicle at an inference speed exceeding 110 FPS. Finally, we conduct qualitative and quantitative experiments on the RELLIS-3D off-road dataset and the SemanticKITTI on-road dataset, which demonstrate the efficiency and accuracy of the proposed approach.
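To illustrate the BEV feature map generation pipeline the abstract describes (dynamic voxelization, per-pillar feature pooling, and scattering onto a dense grid), the following NumPy sketch assigns each LiDAR point to a pillar and scatters simple pooled statistics onto a BEV map. The function name, grid parameters, and pooled features ([point count, mean z, max z]) are illustrative assumptions, not the paper's implementation, which uses a learned pillar feature encoder.

```python
import numpy as np

def pillar_scatter_bev(points, bev_range=(-25.6, 25.6), cell=0.4):
    """Dynamic voxelization + per-pillar pooling + scatter to a dense BEV grid.

    points: (N, 3) array of LiDAR (x, y, z) in the ego frame.
    Returns an (H, W, 3) BEV map with [point count, mean z, max z] per pillar.
    """
    lo, hi = bev_range
    n = int(round((hi - lo) / cell))  # cells per axis
    # Dynamic voxelization: keep every in-range point; unlike fixed-size
    # voxel grids, there is no fixed point budget per pillar.
    inside = np.all((points[:, :2] >= lo) & (points[:, :2] < hi), axis=1)
    pts = points[inside]
    ix = ((pts[:, 0] - lo) / cell).astype(np.int64)
    iy = ((pts[:, 1] - lo) / cell).astype(np.int64)
    flat = ix * n + iy  # flat pillar id per point
    # Pool per-pillar features with scatter-style reductions.
    count = np.bincount(flat, minlength=n * n).astype(np.float32)
    zsum = np.bincount(flat, weights=pts[:, 2], minlength=n * n)
    zmax = np.full(n * n, -np.inf)
    np.maximum.at(zmax, flat, pts[:, 2])  # per-pillar max height
    occupied = count > 0
    mean_z = np.zeros(n * n, dtype=np.float32)
    mean_z[occupied] = zsum[occupied] / count[occupied]
    zmax[~occupied] = 0.0  # empty pillars get zero features
    # Scatter the pooled pillar features back onto the dense BEV grid.
    return np.stack([count, mean_z, zmax.astype(np.float32)],
                    axis=-1).reshape(n, n, 3)
```

In the paper's setting, a learned per-point MLP (as in PointPillars) would replace the hand-crafted statistics before scattering, and the resulting dense BEV map would feed the traversability completion module.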

Information

Type
Research Article
Copyright
© The Author(s), 2025. Published by Cambridge University Press

References

Waibel, G. G., Löw, T., Nass, M., Howard, D., Bandyopadhyay, T. and Borges, P. V. K., “How rough is the path? Terrain traversability estimation for local and global path planning,” IEEE Trans. Intell. Transp. Syst. 23(9), 16462–16473 (2022). doi:10.1109/TITS.2022.3150328
Guan, T., Kothandaraman, D., Chandra, R., Sathyamoorthy, A. J., Weerakoon, K. and Manocha, D., “GA-Nav: Efficient terrain segmentation for robot navigation in unstructured outdoor environments,” IEEE Robot. Autom. Lett. 7(3), 8138–8145 (2022). doi:10.1109/LRA.2022.3187278
Alamiyan-Harandi, F., Derhami, V. and Jamshidi, F., “Combination of recurrent neural network and deep learning for robot navigation task in off-road environment,” Robotica 38(8), 1450–1462 (2020). doi:10.1017/S0263574719001565
Jian, Z., Lu, Z., Zhou, X., Lan, B., Xiao, A., Wang, X. and Liang, B., “PUTN: A Plane-fitting Based Uneven Terrain Navigation Framework,” In: 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems, Kyoto, Japan (2022) pp. 7160–7166.
Bi, Q., Zhang, X., Zhang, S., Pan, Z. and Wang, R., “TMPU: A Framework for Terrain Traversability Mapping and Planning in Uneven and Unstructured Environments,” In: 2023 42nd Chinese Control Conference, Tianjin, China (2023) pp. 01–07.
Liu, J., Chen, X., Xiao, J., Lin, S., Zheng, Z. and Lu, H., “Hybrid Map-based Path Planning for Robot Navigation in Unstructured Environments,” In: IEEE/RSJ International Conference on Intelligent Robots and Systems, Detroit, MI, IEEE (2023) pp. 2216–2223.
Wang, R., Wang, K., Song, W. and Fu, M., “Aerial-ground collaborative continuous risk mapping for autonomous driving of unmanned ground vehicle in off-road environments,” IEEE Trans. Aerosp. Electron. Syst. 59(6), 9026–9041 (2023). doi:10.1109/TAES.2023.3312627
Zhang, S., Zhang, X., Li, T., Yuan, J. and Fang, Y., “Fast active aerial exploration for traversable path finding of ground robots in unknown environments,” IEEE Trans. Instrum. Meas. 71, 1–13 (2022), Art. no. 7502213.
Zhou, B., Yi, J., Zhang, X., Chen, L., Yang, D., Han, F. and Zhang, H., “An autonomous navigation approach for unmanned vehicle in outdoor unstructured terrain with dynamic and negative obstacles,” Robotica 40(8), 2831–2854 (2022). doi:10.1017/S0263574721001983
Guan, T., He, Z., Song, R. and Zhang, L., “TNES: Terrain traversability mapping, navigation and excavation system for autonomous excavators on worksite,” Auton. Robots 47, 695–714 (2023). doi:10.1007/s10514-023-10113-9
Kim, O., Seo, J., Ahn, S. and Kim, C. H., “UFO: Uncertainty-aware LiDAR-image Fusion for Off-road Semantic Terrain Map Estimation,” In: 2024 IEEE Intelligent Vehicles Symposium, Jeju Island, Korea, IEEE (2024) pp. 192–199.
Ronneberger, O., Fischer, P. and Brox, T., “U-Net: Convolutional Networks for Biomedical Image Segmentation,” In: Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, Springer International Publishing (2015) pp. 234–241.
Shaban, A., Meng, X., Lee, J., Boots, B. and Fox, D., “Semantic Terrain Classification for Off-road Autonomous Driving,” In: Conference on Robot Learning, London, UK, PMLR (2021) pp. 619–629.
Zhou, Y. and Tuzel, O., “VoxelNet: End-to-end Learning for Point Cloud Based 3D Object Detection,” In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, IEEE (2018) pp. 4490–4499.
Yan, Y., Mao, Y. and Li, B., “SECOND: Sparsely embedded convolutional detection,” Sensors 18(10), 3337 (2018).
Lang, A. H., Vora, S., Caesar, H., Zhou, L., Yang, J. and Beijbom, O., “PointPillars: Fast Encoders for Object Detection From Point Clouds,” In: IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, IEEE (2019) pp. 12689–12697.
Qi, C., Su, H., Mo, K. and Guibas, L. J., “PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation,” In: IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, IEEE (2017) pp. 77–85.
Shelhamer, E., Long, J. and Darrell, T., “Fully convolutional networks for semantic segmentation,” IEEE Trans. Pattern Anal. Mach. Intell. 39, 640–651 (2017).
Chen, L., Zhu, Y., Papandreou, G., Schroff, F. and Adam, H., “Encoder-decoder With Atrous Separable Convolution for Semantic Image Segmentation,” In: European Conference on Computer Vision, Springer International Publishing, Switzerland (2018) pp. 833–851.
Howard, A. G., Sandler, M., Chu, G., Chen, L., Chen, B., Tan, M., Wang, W., Zhu, Y., Pang, R., Vasudevan, V., Le, Q. V. and Adam, H., “Searching for MobileNetV3,” In: IEEE/CVF International Conference on Computer Vision, Seoul, Korea (South), IEEE (2019) pp. 1314–1324.
Yu, C., Wang, J., Peng, C., Gao, C., Yu, G. and Sang, N., “BiSeNet: Bilateral Segmentation Network for Real-time Semantic Segmentation,” In: European Conference on Computer Vision, Munich, Germany, Springer (2018) pp. 334–349.
Peng, J., Liu, Y., Tang, S., Hao, Y., Chu, L., Chen, G., Wu, Z., Chen, Z., Yu, Z., Du, Y., Dang, Q., Lai, B., Liu, Q., Hu, X., Yu, D. and Ma, Y., “PP-LiteSeg: A superior real-time semantic segmentation model,” arXiv preprint arXiv:2204.02681 (2022).
Fan, M., Lai, S., Huang, J., Wei, X., Chai, Z., Luo, J. and Wei, X., “Rethinking BiSeNet for Real-time Semantic Segmentation,” In: 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, IEEE (2021) pp. 9711–9720.
Min, C., Jiang, W., Zhao, D., Xu, J., Xiao, L., Nie, Y. and Dai, B., “ORFD: A Dataset and Benchmark for Off-road Freespace Detection,” In: International Conference on Robotics and Automation, Philadelphia, PA, IEEE (2022) pp. 2532–2538.
Lim, H., Oh, M. and Myung, H., “Patchwork: Concentric zone-based region-wise ground segmentation with ground likelihood estimation using a 3D LiDAR sensor,” IEEE Robot. Autom. Lett. 6(4), 6458–6465 (2021). doi:10.1109/LRA.2021.3093009
Shan, T., Wang, J., Englot, B. and Doherty, K. A., “Bayesian Generalized Kernel Inference for Terrain Traversability Mapping,” In: Conference on Robot Learning, Zürich, Switzerland, PMLR (2018) pp. 829–838.
Doherty, K., Shan, T., Wang, J. and Englot, B., “Learning-aided 3-D occupancy mapping with Bayesian generalized kernel inference,” IEEE Trans. Robot. 35(4), 953–966 (2019). doi:10.1109/TRO.2019.2912487
Li, Q., Wang, Y., Wang, Y. and Zhao, H., “HDMapNet: An Online HD Map Construction and Evaluation Framework,” In: International Conference on Robotics and Automation, Philadelphia, PA, IEEE (2022) pp. 4628–4634.
Li, Z., Wang, W., Li, H., Xie, E., Sima, C., Lu, T., Yu, Q. and Dai, J., “BEVFormer: Learning bird’s-eye-view representation from multi-camera images via spatiotemporal transformers,” IEEE Trans. Pattern Anal. Mach. Intell. 47, 2020–2036 (2024).
Liu, Z., Tang, H., Amini, A., Yang, X., Mao, H., Rus, D. and Han, S., “BEVFusion: Multi-task Multi-sensor Fusion With Unified Bird’s-eye View Representation,” In: 2023 IEEE International Conference on Robotics and Automation, IEEE (2023) pp. 2774–2781.
Zhou, Y., Sun, P., Zhang, Y., Anguelov, D., Gao, J., Ouyang, T. Y., Guo, J., Ngiam, J. and Vasudevan, V., “End-to-end Multi-view Fusion for 3D Object Detection in LiDAR Point Clouds,” In: Conference on Robot Learning, Osaka, Japan, PMLR (2020) pp. 923–932.
Jiang, P., Osteen, P., Wigness, M. and Saripalli, S., “RELLIS-3D Dataset: Data, Benchmarks and Analysis,” In: IEEE International Conference on Robotics and Automation, Xi’an, China, IEEE (2021) pp. 1110–1116.
Chen, Y., Wei, P., Liu, Z., Wang, B., Yang, J. and Liu, W., “FASTC: A Fast Attentional Framework for Semantic Traversability Classification Using Point Cloud,” In: European Conference on Artificial Intelligence, Kraków, Poland, IOS Press (2023) pp. 429–436.
Chao, P., Kao, C. Y., Ruan, Y., Huang, C. H. and Lin, Y. L., “HarDNet: A Low Memory Traffic Network,” In: IEEE/CVF International Conference on Computer Vision, Seoul, Korea (South), IEEE (2019) pp. 3551–3560.
Gao, R., “Rethinking Dilated Convolution for Real-time Semantic Segmentation,” In: IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Vancouver, BC, Canada, IEEE (2023) pp. 4675–4684.
Supplementary material

Wu et al. supplementary material (File, 53.8 MB)