Chinese Academy of Sciences Institutional Repositories Grid
PanoVLM: Low-Cost and accurate panoramic vision and LiDAR fused mapping

Document type: Journal article

Authors: Tu, Diantao (1,2,3); Cui, Hainan (1,2,3); Shen, Shuhan (1)
Journal: ISPRS JOURNAL OF PHOTOGRAMMETRY AND REMOTE SENSING
Publication date: 2023-12-01
Volume: 206, Pages: 149-167
ISSN: 0924-2716
Keywords: Panoramic camera; Line feature matching; Camera-LiDAR joint optimization; Structure-from-Motion; Multi-view stereo
DOI: 10.1016/j.isprsjprs.2023.11.012
Corresponding authors: Cui, Hainan (hncui@nlpr.ia.ac.cn); Shen, Shuhan (shshen@nlpr.ia.ac.cn)
Abstract: Cameras and LiDARs are currently the two types of sensors most commonly used for 3D mapping. Vision-based methods are susceptible to textureless regions and lighting changes, while LiDAR-based methods easily degenerate in scenes without salient structural features. Most current fusion-based methods require strict synchronization between the camera and the LiDAR and need auxiliary sensors, such as an IMU, all of which increase device cost and complexity. To address this, we propose in this paper a low-cost mapping pipeline called PanoVLM that requires only a panoramic camera and a LiDAR without strict synchronization. First, camera poses are estimated by a LiDAR-assisted global Structure-from-Motion, and LiDAR poses are derived from the initial camera-LiDAR relative pose. Then, line-to-line and point-to-plane associations are established between LiDAR point clouds and used to further refine the LiDAR poses and remove motion distortion. With the initial sensor poses, line-to-line correspondences are established between images and LiDAR scans to refine their poses jointly. The final step, joint panoramic Multi-View Stereo, estimates a depth map for each panoramic image and fuses them into a complete dense 3D map. Experimental results show that PanoVLM works in various scenarios and outperforms state-of-the-art (SOTA) vision-based and LiDAR-based methods. Compared with the current SOTA LiDAR-based techniques, namely LOAM, LeGO-LOAM, and F-LOAM, PanoVLM reduces the absolute rotation error and absolute translation error by 20% and 35%, respectively. Our code and dataset are available at https://github.com/3dv-casia/PanoVLM.
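
Note: the point-to-plane association mentioned in the abstract refers to a residual that is standard in LiDAR scan registration. The short Python sketch below only illustrates that generic residual; the function name and the example values are hypothetical and are not taken from the authors' implementation (see the GitHub repository linked above for the actual code).

    import numpy as np

    def point_to_plane_residual(R, t, p_src, q_tgt, n_tgt):
        # Signed distance from the transformed source point R @ p_src + t
        # to the tangent plane through the matched target point q_tgt with
        # unit normal n_tgt. Pose refinement drives this residual to zero.
        return float(n_tgt @ (R @ p_src + t - q_tgt))

    # Illustrative usage with made-up values (not data from the paper):
    R = np.eye(3)                           # current rotation estimate
    t = np.array([0.05, 0.0, 0.0])          # current translation estimate
    p_src = np.array([1.0, 2.0, 0.5])       # point from the source scan
    q_tgt = np.array([1.02, 2.01, 0.48])    # nearest point in the target scan
    n_tgt = np.array([0.0, 0.0, 1.0])       # plane normal estimated at q_tgt
    print(point_to_plane_residual(R, t, p_src, q_tgt, n_tgt))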
WOS keywords: SLAM; ROBUST
WOS research areas: Physical Geography; Geology; Remote Sensing; Imaging Science & Photographic Technology
Language: English
Publisher: ELSEVIER
WOS record number: WOS:001114992000001
Source URL: http://ir.ia.ac.cn/handle/173211/55057
Collection: CAS Engineering Laboratory for Industrial Vision Intelligent Equipment
Affiliations:
1. Chinese Acad Sci, Inst Automat, Beijing, Peoples R China
2.Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing, Peoples R China
3.Sensetime Res Grp, CASIA, Beijing, Peoples R China
Recommended citation:
GB/T 7714
Tu, Diantao, Cui, Hainan, Shen, Shuhan. PanoVLM: Low-Cost and accurate panoramic vision and LiDAR fused mapping[J]. ISPRS JOURNAL OF PHOTOGRAMMETRY AND REMOTE SENSING, 2023, 206: 149-167.
APA: Tu, Diantao, Cui, Hainan, & Shen, Shuhan. (2023). PanoVLM: Low-Cost and accurate panoramic vision and LiDAR fused mapping. ISPRS JOURNAL OF PHOTOGRAMMETRY AND REMOTE SENSING, 206, 149-167.
MLA: Tu, Diantao, et al. "PanoVLM: Low-Cost and accurate panoramic vision and LiDAR fused mapping". ISPRS JOURNAL OF PHOTOGRAMMETRY AND REMOTE SENSING 206 (2023): 149-167.

Deposit method: OAI harvesting

Source: Institute of Automation
