
Automated surface damage detection with an embodied intelligence robotic platform

Published online by Cambridge University Press:  22 January 2026

Kamil Altinay
Affiliation:
Department of Engineering, Durham University, Durham, UK; Department of Civil Engineering, University of Birmingham, UK
Zehao Ye
Affiliation:
Department of Engineering, Durham University, Durham, UK; Department of Civil Engineering, University of Birmingham, UK
Stergios-Aristoteles Mitoulis
Affiliation:
The Bartlett School of Sustainable Construction (BSSC), University College London, UK
Jelena Ninić*
Affiliation:
Department of Engineering, Durham University, Durham, UK
*
Corresponding author: Jelena Ninić; Email: jelena.ninic@durham.ac.uk

Abstract

Regular inspections of civil structures and infrastructure, performed by professional inspectors, are costly and demanding in terms of time and safety requirements. Additionally, the outcome of inspections can be subjective and inaccurate, as it relies on the individual inspector’s expertise. To address these challenges, autonomous inspection systems offer a promising alternative. However, existing robotic inspection systems often lack adaptive positioning capabilities and integrated crack labelling, limiting their detection accuracy and their contribution to long-term dataset improvement. This study introduces a fully autonomous framework that combines real-time crack detection with adaptive pose adjustment, automated recording and labelling of defects, and integration of RGB-D and LiDAR sensing for precise navigation. Damage detection is performed using YOLOv5, a widely used detection model, which analyzes the RGB image stream to detect cracks and generates labels for dataset creation. The robot autonomously adjusts its position based on confidence feedback from the detection algorithm, optimizing its vantage point for improved detection accuracy. Experimental inspections showed an average confidence gain of 18% (exceeding 20% for certain crack types), a reduction in size estimation error from 23.31% to 10.09%, and a decrease in the detection failure rate from 20% to 6.66%. While quantitative validation during field testing proved challenging due to dynamic environmental conditions, qualitative observations aligned with these trends, suggesting the system’s potential to reduce manual intervention in inspections. Moreover, the system enables automated recording and labelling of detected cracks, contributing to the continuous improvement of machine learning models for structural health monitoring.
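The confidence-feedback pose adjustment described above can be sketched as a simple greedy loop: probe neighbouring poses, re-run detection, and keep moving while the detection confidence improves. The sketch below is illustrative only; `detect_confidence` is a hypothetical stand-in for the YOLOv5 inference call, and the one-dimensional pose and the toy confidence model are assumptions, not the paper's implementation.

```python
def detect_confidence(pose):
    """Hypothetical stand-in for the YOLOv5 detection call.

    In the real system this would run inference on the current RGB
    frame and return the top crack-detection confidence (0.0-1.0).
    Here, a toy model: confidence peaks when the robot faces the
    crack head-on (pose 0).
    """
    return max(0.0, 1.0 - 0.1 * abs(pose))

def adjust_pose(initial_pose, step=1, max_iters=10, min_gain=0.01):
    """Greedy confidence-feedback pose adjustment (illustrative sketch).

    Probes the two neighbouring poses and keeps moving while the
    detection confidence improves by at least `min_gain` per step.
    """
    pose = initial_pose
    best = detect_confidence(pose)
    for _ in range(max_iters):
        candidates = [(pose - step, detect_confidence(pose - step)),
                      (pose + step, detect_confidence(pose + step))]
        next_pose, next_conf = max(candidates, key=lambda c: c[1])
        if next_conf - best < min_gain:
            break  # converged: no neighbouring pose offers a meaningful gain
        pose, best = next_pose, next_conf
    return pose, best
```

Under this toy confidence model, `adjust_pose(5)` walks the robot step by step toward the head-on pose and stops once no neighbour improves the score.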

Information

Type
Research Article
Creative Commons
Creative Commons License - CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2026. Published by Cambridge University Press

Figure 1. Flowchart illustrating the autonomous inspection framework for surface damage detection. The system integrates hardware components, including LiDAR, RGB-D cameras, and positioning sensors, with ROS nodes that process sensor data for mapping, navigation, and crack detection. Additionally, an Rviz (ROS visualization) node provides real-time visualization of mapping and navigation data. The main node coordinates robot control, utilizing DC motors for movement and a servo motor for camera adjustments. The YOLO node identifies cracks, while the mapping node updates the environment in real-time, ensuring precise localization and obstacle avoidance.

Figure 2. Physical components of the robot platform, including sensors (Camera, Lidar), motion mechanisms (Mecanum wheels), power supply (Battery), control modules (STM32 microcontroller), and peripheral interfaces (USB Hub).

Figure 3. Hardware integration of the robot platform, detailing physical connections and data communication pathways between components such as sensors, actuators, controllers, and power systems to illustrate their collaborative functionality.

Figure 4. Workflow of the autonomous movements during the inspection. The figure illustrates the sequential steps taken by the robot to inspect the structure autonomously (see https://youtu.be/Ohkx5kU3q_8).

Figure 5. Workflow of the autonomous inspection system using depth camera, LiDAR, and AI-based object detection for damage assessment, including position adjustment before recording damage.
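The size estimation in this workflow can be illustrated with the standard pinhole camera model: a bounding box's pixel extent is converted to a metric extent using the measured depth and the camera's focal lengths. The function and parameter names below are assumptions for illustration; the paper's actual size-estimation procedure may differ.

```python
def bbox_to_metric_size(bbox_px, depth_m, fx, fy):
    """Estimate the physical extent of a detected crack (illustrative sketch).

    Applies the pinhole camera model: metric size = pixel size * depth /
    focal length, assuming the crack lies roughly on a fronto-parallel plane.

    bbox_px : (x_min, y_min, x_max, y_max) in pixels
    depth_m : average depth-camera reading over the crack surface, in metres
    fx, fy  : camera focal lengths in pixels (from the intrinsic matrix)
    """
    x_min, y_min, x_max, y_max = bbox_px
    width_m = (x_max - x_min) * depth_m / fx
    height_m = (y_max - y_min) * depth_m / fy
    return width_m, height_m
```

For example, a 100-pixel-wide box seen at 1.0 m with a 500-pixel focal length maps to a 0.2 m physical width.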

Figure 6. Detailed methodology of the main node in the autonomous inspection system, including object detection, position adjustment, damage recording, and navigation control.

Figure 7. Flowchart illustrating the adaptive pose adjustment process for crack detection. The robot iteratively refines its position based on bounding box proximity to frame edges and confidence scores, ensuring optimal crack visibility before capturing detailed images.

Figure 8. Detailed architecture of the auxiliary nodes.

Figure 9. Overview of the YOLOv5-based crack detection pipeline with GPU-accelerated inference. The pipeline consists of three main stages: preprocessing, where images are resized, padded, and normalized; inference, performed on a GPU using a TRT-optimized YOLOv5 model; and post-processing, where the final detection results are extracted using non-maximum suppression (NMS).
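Two of the stages in this pipeline are standard enough to sketch. Letterbox preprocessing scales the image to fit the model's square input while preserving aspect ratio, then pads the remainder; non-maximum suppression (NMS) greedily keeps the highest-scoring boxes and discards overlapping duplicates. The sketch below is a minimal, generic version of these YOLOv5-style steps, not the paper's TRT-optimized code; function names and the 640-pixel default are assumptions.

```python
def letterbox_params(src_w, src_h, dst=640):
    """Compute resize/pad parameters for YOLOv5-style letterboxing.

    The image is scaled to fit inside a dst x dst square while keeping
    its aspect ratio, then padded symmetrically with border pixels.
    """
    scale = min(dst / src_w, dst / src_h)
    new_w, new_h = round(src_w * scale), round(src_h * scale)
    pad_x, pad_y = (dst - new_w) / 2, (dst - new_h) / 2
    return scale, new_w, new_h, pad_x, pad_y

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, iou_thresh=0.45):
    """Greedy non-maximum suppression: keep boxes in descending score
    order, dropping any box that overlaps a kept one above iou_thresh."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thresh for j in keep):
            keep.append(i)
    return keep
```

For a 1280x720 frame and a 640-pixel model input, letterboxing scales by 0.5 to 640x360 and pads 140 pixels above and below; NMS then collapses near-duplicate crack detections into single boxes.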

Table 1. Performance evaluation of the trained model on different datasets

Figure 10. Process flow of the Gmapping algorithm. The algorithm integrates LiDAR scans and odometry data to estimate the robot’s pose and update the map.

Figure 11. Visualization of the robotic inspection system in RViz, showcasing the robot model, sensor data, and navigation path.

Figure 12. Verification experiment in a controlled indoor environment. The robot navigates a narrow corridor while detecting artificially introduced cracks on foam board walls. The experiment includes crack detection on both masonry and concrete surfaces to evaluate the robot’s performance.

Figure 13. Tests in the verification environment.

Table 2. Confidence rate improvement after pose adjustment for different types of cracks

Figure 14. Verification experiment for one side wall in a controlled indoor environment.

Figure 15. Experimental verification: comparing detected crack results without and with pose adjustment.

Table 3. Comparison of detected crack sizes with actual sizes

Figure 16. The robot conducting tunnel inspection, capturing real-time visual data, and detecting crack formations along the tunnel walls.

Figure 17. Comparison of crack detection performance without and with pose adjustment. Each pair of images shows the same scene: the left image presents the crack detection results without pose adjustment, while the right image shows improved detection after applying pose adjustment.

Figure 18. Original images (left), detection results without pose adjustment (middle), and detection results with pose adjustment (right).
