On Robust Cross-view Consistency in Self-supervised Monocular Depth Estimation
Document Type: Journal Article
Authors | Haimei Zhao 1; Jing Zhang 1; Zhuo Chen 3; Bo Yuan; Dacheng Tao
Journal | Machine Intelligence Research
Publication Date | 2024
Volume | 21
Issue | 3
Pages | 495-513
Keywords | 3D vision, depth estimation, cross-view consistency, self-supervised learning, monocular perception
ISSN | 2731-538X
DOI | 10.1007/s11633-023-1474-0 |
Abstract | Remarkable progress has been made in self-supervised monocular depth estimation (SS-MDE) by exploring cross-view consistency, e.g., photometric consistency and 3D point cloud consistency. However, these consistency measures are highly vulnerable to illumination variance, occlusions, texture-less regions, and moving objects, making them insufficiently robust across diverse scenes. To address this challenge, we study two kinds of robust cross-view consistency in this paper. First, the spatial offset field between adjacent frames is obtained by reconstructing the reference frame from its neighbors via deformable alignment, and it is used to align the temporal depth features via a depth feature alignment (DFA) loss. Second, the 3D point clouds of each reference frame and its nearby frames are computed and transformed into voxel space, where the point density in each voxel is calculated and aligned via a voxel density alignment (VDA) loss. In this way, we exploit the temporal coherence in both depth feature space and 3D voxel space for SS-MDE, shifting the "point-to-point" alignment paradigm to a "region-to-region" one. Compared with the photometric consistency loss and the rigid point cloud alignment loss, the proposed DFA and VDA losses are more robust owing to the strong representation power of deep features and the high tolerance of voxel density to the aforementioned challenges. Experimental results on several outdoor benchmarks show that our method outperforms current state-of-the-art techniques. Extensive ablation studies and analyses validate the effectiveness of the proposed losses, especially in challenging scenes. The code and models are available at https://github.com/sunnyHelen/RCVC-depth.
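The voxel density alignment idea described in the abstract lends itself to a short illustration: voxelize two point clouds (the reference frame's and a neighbor's, transformed into the reference coordinates), count points per voxel, and penalize the density mismatch. Below is a minimal PyTorch sketch of that idea; all names (`voxel_density`, `vda_loss`) and the grid parameters are illustrative assumptions, not the authors' released implementation (see the GitHub repository above for that).

```python
# Hypothetical sketch of a voxel density alignment (VDA) style loss.
# Counts points per voxel on a fixed grid and aligns the two normalized
# density histograms ("region-to-region" rather than "point-to-point").
import torch

def voxel_density(points: torch.Tensor, voxel_size: float,
                  grid_min: torch.Tensor, grid_dims: tuple) -> torch.Tensor:
    """Count how many points fall into each voxel of a fixed grid.

    points:     (N, 3) 3D points (e.g., back-projected from predicted depth).
    voxel_size: edge length of a cubic voxel in meters.
    grid_min:   (3,) minimum corner of the voxel grid.
    grid_dims:  number of voxels along (x, y, z).
    """
    idx = torch.floor((points - grid_min) / voxel_size).long()
    dims = torch.tensor(grid_dims, device=points.device)
    # Keep only points that land inside the grid.
    mask = ((idx >= 0) & (idx < dims)).all(dim=1)
    idx = idx[mask]
    # Flatten 3D voxel indices to 1D and histogram them.
    flat = (idx[:, 0] * dims[1] + idx[:, 1]) * dims[2] + idx[:, 2]
    density = torch.zeros(int(dims.prod()), device=points.device)
    density.scatter_add_(0, flat, torch.ones_like(flat, dtype=density.dtype))
    return density

def vda_loss(ref_points: torch.Tensor, nbr_points_in_ref: torch.Tensor,
             voxel_size: float = 0.5, grid_dims: tuple = (100, 20, 100)):
    """L1 alignment of per-voxel point densities (illustrative grid bounds)."""
    grid_min = torch.tensor([-25.0, -3.0, 0.0], device=ref_points.device)
    d_ref = voxel_density(ref_points, voxel_size, grid_min, grid_dims)
    d_nbr = voxel_density(nbr_points_in_ref, voxel_size, grid_min, grid_dims)
    # Normalize so the loss is insensitive to the absolute point count.
    return torch.abs(d_ref / d_ref.sum().clamp(min=1)
                     - d_nbr / d_nbr.sum().clamp(min=1)).sum()
```

Note that hard counting via `floor` is not differentiable with respect to the point coordinates, so a trainable version would need a soft voxel assignment (e.g., trilinear splatting); the sketch only illustrates the density statistic that the VDA loss aligns.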
Source URL | [http://ir.ia.ac.cn/handle/173211/56478]
Collection | Institute of Automation_Academic Journals_International Journal of Automation and Computing
Author Affiliations | 1. School of Computer Science, University of Sydney, Sydney 2008, Australia; 2. School of Information Technology & Electrical Engineering, University of Queensland, Brisbane 4072, Australia; 3. Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, China
Recommended Citation (GB/T 7714) | Haimei Zhao, Jing Zhang, Zhuo Chen, et al. On Robust Cross-view Consistency in Self-supervised Monocular Depth Estimation[J]. Machine Intelligence Research, 2024, 21(3): 495-513.
APA | Haimei Zhao, Jing Zhang, Zhuo Chen, Bo Yuan, & Dacheng Tao. (2024). On Robust Cross-view Consistency in Self-supervised Monocular Depth Estimation. Machine Intelligence Research, 21(3), 495-513.
MLA | Haimei Zhao, et al. "On Robust Cross-view Consistency in Self-supervised Monocular Depth Estimation". Machine Intelligence Research 21.3 (2024): 495-513.
Deposit Method: OAI Harvesting
Source: Institute of Automation