
An Internet of Things-enabled adaptive robotic framework for autonomous navigation in smart campus and urban mobility systems

Published online by Cambridge University Press:  14 November 2025

Indra Kishor
Affiliation:
Department of CSE, Poornima Institute of Engineering & Technology, Jaipur, Rajasthan, India
Udit Mamodiya
Affiliation:
Faculty of Engineering & Technology, Poornima University, Jaipur, Rajasthan, India
Sumit Saini*
Affiliation:
Department of Electrical Engineering, Central University of Haryana, Mahendergarh, Haryana, India
Mohammed Amin Almaiah
Affiliation:
Department of Computer Science, University of Jordan, Jordan
Vaibhav Gandhi
Affiliation:
Parul University, Vadodara, Gujarat, India
*Corresponding author: Sumit Saini; Email: drsumiteed@cuh.ac.in

Abstract

This research proposes an Internet of Things (IoT)-enabled adaptive robotic navigation framework tailored for smart campuses and urban mobility systems. It aims to overcome critical limitations of existing systems that rely on static data, lack real-time adaptability, and perform poorly in dynamic or adverse environments. The proposed system uniquely integrates heterogeneous real-time data sources, including traffic, obstacle, and weather data captured by IoT sensors, into a unified decision-making architecture. It combines a graph neural network for dynamic environmental modeling, a convolutional neural network for obstacle mapping, and a multilayer perceptron for weather-aware path assessment. A proximal policy optimization (PPO)-based reinforcement learning (RL) controller then computes continuous control actions. A novel multi-objective reward function adaptively adjusts priorities between travel time, energy efficiency, collision risk, and terrain stability based on the current IoT context, enabling fine-grained, scenario-aware optimization. The system is deployed on resource-constrained edge hardware (Jetson Nano), demonstrating its feasibility for real-time embedded applications. Simulations across diverse scenarios, including urban traffic congestion, dynamic obstacle handling, and adverse weather, demonstrate 95% navigation accuracy, 98% obstacle detection precision, and near-optimal route selection. The framework sustains real-time operation with 10 Hz decision throughput and sub-300 ms latency, outperforming traditional static and rule-based systems while maintaining over 92% performance consistency under adverse weather. This work introduces a first-of-its-kind modular framework that fuses IoT sensory data, adaptive RL control, and edge deployment for robust, efficient navigation, and establishes a scalable baseline for real-world autonomous mobility in smart city ecosystems.
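The context-adaptive multi-objective reward described above can be illustrated with a minimal sketch. The Python fragment below is not the authors' implementation; the names (IoTContext, adaptive_weights, reward) and the weighting scheme are hypothetical, and it only shows how IoT-derived context (traffic, weather, terrain) might re-weight travel time, energy, collision risk, and terrain stability before a PPO controller optimizes the resulting signal.

# Illustrative sketch only: a context-adaptive multi-objective reward of the
# kind summarized in the abstract. All names and weight values are assumptions.
from dataclasses import dataclass

@dataclass
class IoTContext:
    traffic_density: float   # 0 (free-flowing) .. 1 (congested)
    weather_severity: float  # 0 (clear) .. 1 (severe)
    terrain_roughness: float # 0 (smooth) .. 1 (rough)

def adaptive_weights(ctx: IoTContext) -> dict:
    """Shift priorities toward safety and stability as conditions worsen."""
    risk = max(ctx.traffic_density, ctx.weather_severity)
    return {
        "time":      1.0 - 0.5 * risk,                      # de-emphasize speed under risk
        "energy":    0.5,                                    # constant energy penalty
        "collision": 1.0 + 2.0 * risk,                       # emphasize collision avoidance
        "stability": 0.5 + 1.5 * ctx.terrain_roughness,      # penalize rough-terrain motion
    }

def reward(ctx: IoTContext, dt: float, energy: float,
           collision_risk: float, instability: float) -> float:
    """Negative weighted cost: larger (less negative) is better for the RL controller."""
    w = adaptive_weights(ctx)
    return -(w["time"] * dt + w["energy"] * energy
             + w["collision"] * collision_risk + w["stability"] * instability)

# Example: the same control step is penalized more heavily in fog and heavy traffic.
clear = IoTContext(traffic_density=0.1, weather_severity=0.0, terrain_roughness=0.2)
foggy = IoTContext(traffic_density=0.8, weather_severity=0.9, terrain_roughness=0.2)
print(reward(clear, dt=0.1, energy=0.05, collision_risk=0.2, instability=0.1))
print(reward(foggy, dt=0.1, energy=0.05, collision_risk=0.2, instability=0.1))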

Information

Type
Research Article
Creative Commons
Creative Commons License - CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press

Table I. Comparative summary of related works and research gaps.

Table II. Workflow of the proposed framework for AI-powered robotic navigation. This table summarizes the data used, processing techniques applied, and output of each step of the workflow, integrating IoT, deep learning, and robotic navigation for real-time management of urban traffic.

Figure 1. Workflow diagram of the proposed adaptive robotic navigation system integrating IoT sensing, Kalman-based data fusion, deep visual perception, traffic prediction, reinforcement learning-based decision-making, and optimal trajectory execution. The flowchart outlines the sequential operation of the system, key processing modules, embedded mathematical formulations, and real-time decision feedback loops used for safe and efficient autonomous mobility in dynamic environments.

Figure 2. Comprehensive visualization of the proposed optimal path selection framework. The collage includes (a) a real-time RL-based workflow diagram showing sensor fusion, reward computation, and trajectory generation; (b) a heatmap visualization of cumulative reward across different state-action pairs indicating policy learning convergence; (c) a system-level flowchart integrating perception, prediction, planning, and actuation modules; and (d) a reinforcement learning policy map visualizing the navigational decisions in structured environments. Together, these visualizations reflect the functional depth, mathematical rigor, and real-time applicability of the proposed navigation framework.

Figure 3. Simulation setup and implementation framework. This figure presents the structural flow of the proposed simulation framework, integrating hardware components, software tools, and test scenarios for the development and validation of the AI-powered robotic navigation system.

Figure 4. Urban traffic simulation. The system effectively identifies and selects the optimal route in light traffic conditions.

Figure 5. Obstacle handling. The figure highlights the system's collision-free navigation path in the presence of static and dynamic obstacles.

Figure 6. Weather variations. The figure demonstrates the system’s ability to handle foggy conditions and maintain visibility for route optimization.

Figure 7. The complete simulation and implementation framework integrates hardware components, software tools, and testing environments.

Figure 8. Real-time deployment of the proposed robotic navigation framework. The figure shows key milestones, including hardware integration of the robot platform, sensor onboarding (LiDAR and cameras), reinforcement learning model training, and the internal edge computing setup.

Figure 9. Reinforcement learning-based path planning visualization. (a) Reward convergence curve across training episodes. (b) PPO policy distribution shift showing decision refinement. (c) 3D cost landscape revealing optimal trajectory valleys for adaptive navigation.

Algorithm 1: Continuous linear arc transition mode based on the register.

Figure 10. Flowchart of IoT-enabled adaptive RL navigation algorithm.

Figure 11. Integrated interaction of CNN, GNN, PPO, and classical planners.

Table III. Navigation success rate across different test scenarios.

Figure 12. Navigation accuracy across scenarios and conditions. This plot displays the navigation performance of the proposed system across different scenarios. The left panel shows a heatmap of success rates under urban traffic, obstacle management, and weather variation conditions, with critical areas marked. The right panel shows a 3D surface plot comparing success rate against traffic density and obstacles. The results demonstrate the suitability and overall high accuracy of the model in dynamic and complex situations.

Table IV. Route cost comparison with optimal benchmarks.

Figure 13. Route optimality across scenarios.

Table V. Computational benchmarks of the proposed framework.

Figure 14. 3D path tracing and goal deviation visualization. The main plot illustrates the complete navigational trajectory of the robot from the starting point to the goal over a dynamic terrain surface, generated from real-time elevation and environmental complexity. The highlighted path represents the learned policy action sequence Ltotal. The inset shows a zoomed-in region around the goal with minor deviation analysis, reflecting precision in localization under noise and terrain distortion.

Figure 15. Obstacle detection and avoidance metrics.

Figure 16. Weather-adaptive navigation under distinct environmental conditions.

Table VI. Comparative navigation success rate under weather conditions.

Table VII. Comparative navigation performance under environmental variability.

Table VIII. Ablation performance comparison under varied conditions.