Chinese Academy of Sciences Institutional Repositories Grid
Learning Cross-modality Interaction for Robust Depth Perception of Autonomous Driving

Document Type: Journal Article

Authors: Liang, Yunji (1); Chen, Nengzhen (1); Yu, Zhiwen (1); Tang, Lei (2); Yu, Hongkai (3); Guo, Bin (1); Zeng, Daniel Dajun (4)
Journal: ACM TRANSACTIONS ON INTELLIGENT SYSTEMS AND TECHNOLOGY
Publication Date: 2024-06-01
Volume: 15  Issue: 3  Pages: 26
Keywords: Cascading interaction; autonomous systems; auxiliary task; depth prediction; depth completion
ISSN: 2157-6904
DOI: 10.1145/3650039
Corresponding Author: Liang, Yunji (liangyunji@nwpu.edu.cn)
Abstract: As one of the fundamental tasks of autonomous driving, depth perception aims to perceive physical objects in three dimensions and to judge their distances from the ego vehicle. Although great efforts have been made for depth perception, LiDAR-based and camera-based solutions suffer from low accuracy and poor robustness to noisy input. Given the integration of monocular cameras and LiDAR sensors in autonomous vehicles, in this article we introduce a two-stream architecture that learns the modality-interaction representation under the guidance of an image reconstruction task, compensating for the deficiencies of each modality in a parallel manner. Specifically, in the two-stream architecture, multi-scale cross-modality interactions are preserved via a cascading interaction network under the guidance of the reconstruction task. Next, the shared representation of modality interaction is integrated to infer the dense depth map, exploiting the complementarity and heterogeneity of the two modalities. We evaluated the proposed solution on the KITTI dataset and the CALAR synthetic dataset. Our experimental results show that learning the coupled interaction of modalities under the guidance of an auxiliary task leads to significant performance improvements. Furthermore, our approach is competitive against state-of-the-art models and robust to noisy input. The source code is available at https://github.com/tonyFengye/Code/tree/master.
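The abstract outlines the method at a high level: two parallel encoder streams (camera image and sparse LiDAR depth), cross-modality fusion cascaded across multiple scales, and an auxiliary image-reconstruction head that guides the shared interaction feature used to predict the dense depth map. The sketch below illustrates that idea in PyTorch; all module names, channel widths, the fusion rule, and the loss weighting are illustrative assumptions rather than the paper's actual design, which is given in the linked source code.

```python
# A minimal PyTorch sketch of a two-stream network with cascaded multi-scale
# fusion and an auxiliary image-reconstruction head, following the high-level
# description in the abstract above.  Module names, channel widths, the fusion
# rule, and the loss weighting are illustrative assumptions, not the paper's
# actual architecture (see https://github.com/tonyFengye/Code/tree/master).
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch, out_ch, stride=1):
    """3x3 convolution + BatchNorm + ReLU, the basic unit of both encoders."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class TwoStreamDepthNet(nn.Module):
    """RGB stream + sparse LiDAR depth stream with cascading interaction.

    At every scale the two streams are fused together with the interaction
    feature from the previous scale (the cascade).  The final shared feature
    feeds two heads: dense depth prediction (main task) and image
    reconstruction (auxiliary task that guides the interaction).
    """

    def __init__(self, base_ch=32, num_scales=3):
        super().__init__()
        self.rgb_enc, self.lidar_enc, self.fuse = nn.ModuleList(), nn.ModuleList(), nn.ModuleList()
        in_rgb, in_lidar, in_fuse, ch = 3, 1, 0, base_ch
        for s in range(num_scales):
            stride = 1 if s == 0 else 2
            self.rgb_enc.append(conv_block(in_rgb, ch, stride))
            self.lidar_enc.append(conv_block(in_lidar, ch, stride))
            # Fuse RGB features, LiDAR features, and the previous interaction feature.
            self.fuse.append(conv_block(2 * ch + in_fuse, ch))
            in_rgb = in_lidar = in_fuse = ch
            ch *= 2
        self.depth_head = nn.Conv2d(in_fuse, 1, 1)  # dense depth map (main task)
        self.recon_head = nn.Conv2d(in_fuse, 3, 1)  # reconstructed image (auxiliary task)

    def forward(self, rgb, sparse_depth):
        x_rgb, x_lidar, interact = rgb, sparse_depth, None
        for rgb_blk, lidar_blk, fuse_blk in zip(self.rgb_enc, self.lidar_enc, self.fuse):
            x_rgb, x_lidar = rgb_blk(x_rgb), lidar_blk(x_lidar)
            feats = [x_rgb, x_lidar]
            if interact is not None:
                # Cascade: carry the previous interaction feature down to this scale.
                interact = F.interpolate(interact, size=x_rgb.shape[-2:],
                                         mode="bilinear", align_corners=False)
                feats.append(interact)
            interact = fuse_blk(torch.cat(feats, dim=1))
        size = rgb.shape[-2:]
        depth = F.interpolate(self.depth_head(interact), size=size, mode="bilinear", align_corners=False)
        recon = F.interpolate(self.recon_head(interact), size=size, mode="bilinear", align_corners=False)
        return depth, recon


if __name__ == "__main__":
    net = TwoStreamDepthNet()
    rgb = torch.randn(1, 3, 128, 416)        # camera image
    sparse = torch.randn(1, 1, 128, 416)     # sparse LiDAR depth projected onto the image plane
    depth, recon = net(rgb, sparse)
    # Joint objective: depth loss plus a weighted reconstruction loss (weight is an assumption).
    loss = F.l1_loss(depth, torch.randn_like(depth)) + 0.1 * F.l1_loss(recon, rgb)
    print(depth.shape, recon.shape, float(loss))
```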
WOS Keywords: NETWORK; IMAGE
Funding Projects: Natural Science Foundation of China [62372378]; Natural Science Foundation of China [72225011]
WOS Research Area: Computer Science
Language: English
WOS Record Number: WOS:001253862500010
Publisher: ASSOC COMPUTING MACHINERY
Funding Organization: Natural Science Foundation of China
Source URL: http://ir.ia.ac.cn/handle/173211/59187
Collection: Institute of Automation, State Key Laboratory of Management and Control for Complex Systems, Research Center for Internet Big Data and Security Informatics
作者单位1.Northwestern Polytech Univ, Sch Comp Sci, 1 Dongxiang Rd, Xian 710129, Shaanxi, Peoples R China
2.Changan Univ, Sch Informat Engn, 126 Naner Huan Rd, Xian 710064, Shaanxi, Peoples R China
3.Cleveland State Univ, 2121 Euclid Ave, Cleveland, OH 4411 USA
4.Chinese Acad Sci, Inst Automat, 95 Zhongguancun East Rd, Beijing 100190, Peoples R China
Recommended Citation:
GB/T 7714: Liang, Yunji, Chen, Nengzhen, Yu, Zhiwen, et al. Learning Cross-modality Interaction for Robust Depth Perception of Autonomous Driving[J]. ACM TRANSACTIONS ON INTELLIGENT SYSTEMS AND TECHNOLOGY, 2024, 15(3): 26.
APA: Liang, Yunji, Chen, Nengzhen, Yu, Zhiwen, Tang, Lei, Yu, Hongkai, ... & Zeng, Daniel Dajun. (2024). Learning Cross-modality Interaction for Robust Depth Perception of Autonomous Driving. ACM TRANSACTIONS ON INTELLIGENT SYSTEMS AND TECHNOLOGY, 15(3), 26.
MLA: Liang, Yunji, et al. "Learning Cross-modality Interaction for Robust Depth Perception of Autonomous Driving". ACM TRANSACTIONS ON INTELLIGENT SYSTEMS AND TECHNOLOGY 15.3 (2024): 26.

Deposit Method: OAI harvesting

Source: Institute of Automation


Unless otherwise stated, all content in this system is protected by copyright, and all rights are reserved.