Abstract
This paper presents a comprehensive framework for robust monocular and stereo depth estimation in domains where acquiring accurate ground-truth depth is prohibitively expensive or physically impossible. We address the core challenge of the simulation-to-reality (Sim2Real) domain gap by developing a mathematically rigorous pipeline that leverages high-fidelity, procedurally generated synthetic data for pre-training, coupled with novel domain-invariant representation learning. Our methodology integrates a synthetic data generation engine capable of producing limitless annotated samples for both virtual human topology and complex urban automotive scenes. We propose a multi-scale feature extraction network optimized for real-time inference and a coupled depth inference module that fuses geometric cues from stereo baselines with temporal structure-from-motion cues derived from small-motion video sequences. A key contribution is a differentiable domain adaptation module that minimizes the H-divergence between synthetic and real feature distributions during fine-tuning. Extensive experimental validation demonstrates significant quantitative improvements over state-of-the-art methods on benchmark datasets. In medical facial volumetrics, our model achieves a mean absolute relative error reduction of 41.2% compared to supervised baselines trained on limited real data. In autonomous navigation tasks, we report a 33.7% improvement in depth accuracy for dynamic obstacles at ranges up to 80 meters, while maintaining real-time throughput of 45 frames per second. Our work substantiates that synthetic data, when coupled with principled domain adaptation, is not merely a substitute for real data but a strategic asset for achieving superior robustness and precision in 3D vision systems.
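
As a point of reference for the domain adaptation objective named above, the sketch below uses the standard definition of the H-divergence and a common adversarial training proxy; the symbols, loss terms, and weighting are illustrative assumptions, not the paper's exact formulation.

$$
d_{\mathcal{H}}(\mathcal{D}_s, \mathcal{D}_t) \;=\; 2 \sup_{h \in \mathcal{H}} \Bigl|\, \Pr_{x \sim \mathcal{D}_s}\bigl[h(x)=1\bigr] \;-\; \Pr_{x \sim \mathcal{D}_t}\bigl[h(x)=1\bigr] \Bigr|
$$

$$
\min_{\theta_f,\,\theta_y}\; \max_{\theta_c}\;\; \mathcal{L}_{\text{depth}}\bigl(\theta_f, \theta_y\bigr) \;-\; \lambda\, \mathcal{L}_{\text{dom}}\bigl(\theta_f, \theta_c\bigr)
$$

Here $\mathcal{D}_s$ and $\mathcal{D}_t$ denote the synthetic and real feature distributions, $\theta_f$ a shared feature extractor, $\theta_y$ the depth head, $\theta_c$ a domain classifier, and $\lambda$ a trade-off weight; minimizing the adversarial term drives the empirical H-divergence between the two feature distributions down during fine-tuning.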


