Chinese Academy of Sciences Institutional Repositories Grid
IterDepth: Iterative Residual Refinement for Outdoor Self-Supervised Multi-Frame Monocular Depth Estimation

Document Type: Journal Article

Authors: Feng, Cheng (5); Chen, Zhen (4,5); Zhang, Congxuan (3,4); Hu, Weiming (2); Li, Bing (1); Lu, Feng (1)
Journal: IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY
Publication Date: 2024
Volume: 34, Issue: 1, Pages: 329-341
Keywords: Estimation; Iterative methods; Cameras; Task analysis; Feature extraction; Decoding; Training; Monocular depth estimation; iterative refinement; self-supervised learning; deep learning
ISSN: 1051-8215
DOI: 10.1109/TCSVT.2023.3284479
Corresponding Authors: Chen, Zhen (dr_chenzhen@163.com); Zhang, Congxuan (zcxdsg@163.com)
Abstract: Self-supervised monocular depth estimation has long been a challenging task in computer vision, relying only on monocular or stereo video for supervision. To address this challenge, we propose a novel multi-frame monocular depth estimation method called IterDepth, which is based on an iterative residual refinement network. IterDepth extracts depth features from consecutive frames and computes a 3D cost volume measuring the difference between current and previous features transformed by PoseCNN (a pose estimation convolutional neural network). We reformulate depth prediction as a residual learning problem, revamping the dominant depth regression paradigm to enable high-accuracy multi-frame monocular depth estimation. Specifically, we design a gated recurrent depth fusion unit that seamlessly blends depth features from the cost volume, image features, and the depth prediction. The unit updates its hidden states and refines the depth map through iterative refinement, achieving more accurate predictions than existing methods. Our experiments on the KITTI dataset demonstrate that IterDepth is 7× faster in FPS (frames per second) than the recent state-of-the-art DepthFormer model while achieving competitive performance. We also test IterDepth on the Cityscapes dataset to showcase its generalization capability in other real-world environments. Moreover, IterDepth can balance accuracy and computational efficiency by adjusting the number of refinement iterations, and it performs competitively with other CNN-based monocular depth estimation approaches. Source code is available at https://github.com/PCwenyue/IterDepth-TCSVT.
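The abstract describes a gated recurrent fusion unit that repeatedly predicts a depth residual and adds it to the current estimate. The following is a minimal, hypothetical sketch of that idea using plain vector features instead of convolutional ones; the class and function names (`DepthFusionGRU`, `refine_depth`) and all weights are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class DepthFusionGRU:
    """Toy gated recurrent unit fusing cost-volume, image, and depth
    features (a simplified, non-convolutional stand-in for the paper's
    gated recurrent depth fusion unit)."""

    def __init__(self, dim, rng):
        # One weight matrix per gate; the input is [hidden, fused features].
        self.Wz = rng.standard_normal((dim, 2 * dim)) * 0.1  # update gate
        self.Wr = rng.standard_normal((dim, 2 * dim)) * 0.1  # reset gate
        self.Wh = rng.standard_normal((dim, 2 * dim)) * 0.1  # candidate state
        self.w_out = rng.standard_normal(dim) * 0.1          # residual head

    def step(self, h, x):
        hx = np.concatenate([h, x])
        z = sigmoid(self.Wz @ hx)            # how much to overwrite the state
        r = sigmoid(self.Wr @ hx)            # how much history to keep
        h_tilde = np.tanh(self.Wh @ np.concatenate([r * h, x]))
        h_new = (1 - z) * h + z * h_tilde    # standard GRU state update
        delta_d = self.w_out @ h_new         # predicted depth residual
        return h_new, delta_d

def refine_depth(d0, cost_feat, img_feat, iters=3, dim=8, seed=0):
    """Iteratively refine an initial depth d0: d_{k+1} = d_k + delta_k."""
    rng = np.random.default_rng(seed)
    gru = DepthFusionGRU(dim, rng)
    h = np.zeros(dim)
    d = d0
    for _ in range(iters):
        # Fuse cost-volume features, image features, and the current depth
        # into one input vector of length `dim` (4 + 3 + 1 here).
        x = np.concatenate([cost_feat[:4], img_feat[:3], [d]])
        h, delta = gru.step(h, x)
        d = d + delta                        # residual update, as in IterDepth
    return d
```

This mirrors the accuracy/efficiency trade-off mentioned in the abstract: increasing `iters` spends more computation on refinement of the same recurrent unit.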
Funding Project: National Key Research and Development Program of China
WOS Research Area: Engineering
Language: English
WOS Accession Number: WOS:001138814400027
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Funding Organization: National Key Research and Development Program of China
Source URL: [http://ir.ia.ac.cn/handle/173211/55564]
Collection: State Key Laboratory of Multimodal Artificial Intelligence Systems
Author Affiliations:
1. Nanchang Hangkong Univ, Sch Measuring & Opt Engn, Nanchang 330063, Peoples R China
2. Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
3. Nanchang Hangkong Univ, Sch Measuring & Opt Engn, Minist Educ, Nanchang 330063, Peoples R China
4. Nanchang Hangkong Univ, Key Lab Nondestruct Testing, Minist Educ, Nanchang 330063, Peoples R China
5. Beihang Univ, Sch Instrumentat & Optoelect Engn, Beijing 100191, Peoples R China
Recommended Citation:
GB/T 7714: Feng, Cheng, Chen, Zhen, Zhang, Congxuan, et al. IterDepth: Iterative Residual Refinement for Outdoor Self-Supervised Multi-Frame Monocular Depth Estimation[J]. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34(1): 329-341.
APA: Feng, Cheng, Chen, Zhen, Zhang, Congxuan, Hu, Weiming, Li, Bing, & Lu, Feng. (2024). IterDepth: Iterative Residual Refinement for Outdoor Self-Supervised Multi-Frame Monocular Depth Estimation. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 34(1), 329-341.
MLA: Feng, Cheng, et al. "IterDepth: Iterative Residual Refinement for Outdoor Self-Supervised Multi-Frame Monocular Depth Estimation." IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY 34.1 (2024): 329-341.

Ingestion Method: OAI Harvesting

Source: Institute of Automation


Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.