Chinese Academy of Sciences Institutional Repositories Grid
Self-distilled Feature Aggregation for Self-supervised Monocular Depth Estimation

Document Type: Conference Paper

Authors: Zhou ZM (周正铭)2,3; Dong QL (董秋雷)1,2,3
Publication Date: 2022-10
Conference Date: 2022-10-23
Conference Venue: Tel Aviv, Israel
Abstract

Self-supervised monocular depth estimation has received much attention recently in computer vision. Most of the existing works in the literature aggregate multi-scale features for depth prediction via either straightforward concatenation or element-wise addition; however, such feature aggregation operations generally neglect the contextual consistency between multi-scale features. Addressing this problem, we propose the Self-Distilled Feature Aggregation (SDFA) module for simultaneously aggregating a pair of low-scale and high-scale features while maintaining their contextual consistency. The SDFA employs three branches to learn three feature offset maps respectively: one offset map for refining the input low-scale feature, and the other two for refining the input high-scale feature in a designed self-distillation manner. Then, we propose an SDFA-based network for self-supervised monocular depth estimation, and design a self-distilled training strategy to train the proposed network with the SDFA module. Experimental results on the KITTI dataset demonstrate that the proposed method outperforms the comparative state-of-the-art methods in most cases. The code is available at https://github.com/ZM-Zhou/SDFA-Net_pytorch.
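The abstract describes the SDFA module at a high level: three branches predict feature offset maps from a low-scale/high-scale feature pair, and the offsets are used to refine each feature before aggregation. The PyTorch sketch below only illustrates this general idea; the branch layout, equal channel counts, the use of grid_sample for offset-based resampling, and the use_second_branch switch are assumptions made for illustration and are not taken from the authors' implementation (see the linked repository for the actual code).

```python
# Minimal, illustrative sketch of an SDFA-style aggregation block.
# NOT the authors' implementation; layout and details are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SDFASketch(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # Assumes low- and high-scale features share the same channel count.
        # Three branches, each predicting a 2-channel (x, y) offset map.
        self.offset_low = nn.Conv2d(2 * channels, 2, kernel_size=3, padding=1)
        self.offset_high_a = nn.Conv2d(2 * channels, 2, kernel_size=3, padding=1)
        self.offset_high_b = nn.Conv2d(2 * channels, 2, kernel_size=3, padding=1)

    @staticmethod
    def _warp(feat, offset):
        # Resample `feat` at positions shifted by `offset` (in pixels),
        # via a normalized sampling grid for F.grid_sample.
        n, _, h, w = feat.shape
        ys, xs = torch.meshgrid(
            torch.arange(h, device=feat.device, dtype=feat.dtype),
            torch.arange(w, device=feat.device, dtype=feat.dtype),
            indexing="ij",
        )
        grid_x = (xs + offset[:, 0]) / max(w - 1, 1) * 2 - 1
        grid_y = (ys + offset[:, 1]) / max(h - 1, 1) * 2 - 1
        grid = torch.stack((grid_x, grid_y), dim=-1)  # (N, H, W, 2)
        return F.grid_sample(feat, grid, align_corners=True)

    def forward(self, low, high, use_second_branch=False):
        # `low` is the lower-resolution feature; upsample it to match `high`.
        low_up = F.interpolate(low, size=high.shape[-2:],
                               mode="bilinear", align_corners=True)
        fused = torch.cat((low_up, high), dim=1)
        low_ref = self._warp(low_up, self.offset_low(fused))
        # One of the two high-scale branches is selected per forward pass;
        # in the paper the two branches are tied by a self-distillation
        # training strategy, which is not reproduced in this sketch.
        off_high = (self.offset_high_b(fused) if use_second_branch
                    else self.offset_high_a(fused))
        high_ref = self._warp(high, off_high)
        return low_ref + high_ref
```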

Proceedings Publisher: Springer Science and Business Media Deutschland GmbH
Source URL: [http://ir.ia.ac.cn/handle/173211/51854]
Collection: Institute of Automation_State Key Laboratory of Pattern Recognition_Robot Vision Team
Corresponding Author: Dong QL (董秋雷)
Author Affiliations:
1. Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences
2. University of Chinese Academy of Sciences
3. Institute of Automation, Chinese Academy of Sciences
Recommended Citation
GB/T 7714
Zhou ZM, Dong QL. Self-distilled Feature Aggregation for Self-supervised Monocular Depth Estimation[C]. In: . Tel Aviv, Israel. 2022-10-23.

Ingestion Method: OAI Harvesting

Source: Institute of Automation

