|
Authors | Fan L(范略)1,2,3,6; Wang F(王峰)5; Wang NY(王乃岩)5; Zhang ZX(张兆翔)1,2,3,4
|
Publication Date | 2022-11
|
Conference Dates | 2022/11/28-2022/12/9
|
Conference Venue | New Orleans
|
Keywords | Point Cloud Object Detection; Autonomous Driving
|
Abstract | As the perception range of LiDAR increases, LiDAR-based 3D object detection
becomes a dominant task in the long-range perception task of autonomous driving. The mainstream 3D object detectors usually build dense feature maps in the network backbone and prediction head. However, the computational and spatial costs on the dense feature map are quadratic to the perception range, which makes them hardly scale up to the long-range setting. To enable efficient long-range LiDAR-based object detection, we build a fully sparse 3D object detector (FSD). The computational and spatial cost of FSD is roughly linear to the number of points and independent of the perception range. FSD is built upon the general sparse voxel encoder and a novel sparse instance recognition (SIR) module. SIR resolves the issue of center feature missing, which hinders the design of the fully sparse architecture. Moreover, SIR avoids the time-consuming neighbor queries in previous point-based methods. We conduct extensive experiments on the large-scale Waymo Open Dataset to reveal the inner workings, and state-of-the-art performance is reported. To demonstrate the superiority of FSD in long-range detection, we also conduct experiments on Argoverse 2 Dataset, which has a much larger perception range (200m) than Waymo Open Dataset (75m). On such a large perception range, FSD achieves state-of-the-art performance and is 2.4× faster than the dense counterpart. Our code is released at https://github.com/TuSimple/SST. |
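Cost-Scaling Sketch | The scaling argument in the abstract (dense BEV feature maps cost quadratic in the perception range, while fully sparse processing is roughly linear in the number of points) can be illustrated with a small counting sketch. The code below is not taken from the paper or the FSD/SST repository; the voxel size, point count, and ranges are assumed values chosen only to show why dense-cell counts grow with the square of the range while occupied-voxel counts are bounded by the number of LiDAR points.

    import numpy as np

    def dense_bev_cells(perception_range_m, voxel_size_m=0.32):
        # Cells in a dense square BEV grid covering [-R, R] x [-R, R].
        side = int(np.ceil(2 * perception_range_m / voxel_size_m))
        return side * side  # quadratic in the perception range

    def occupied_voxels(points_xy, voxel_size_m=0.32):
        # Non-empty voxels only; never more than the number of points,
        # regardless of how far the perception range extends.
        idx = np.floor(points_xy / voxel_size_m).astype(np.int64)
        return np.unique(idx, axis=0).shape[0]

    rng = np.random.default_rng(0)
    points = rng.uniform(-200.0, 200.0, size=(180_000, 2))  # assumed LiDAR sweep size

    for r in (75.0, 200.0):  # Waymo-like vs. Argoverse-2-like ranges from the abstract
        in_range = points[np.max(np.abs(points), axis=1) <= r]
        print(f"range {r:.0f} m: dense cells = {dense_bev_cells(r):,}, "
              f"occupied voxels = {occupied_voxels(in_range):,}")

With these assumed numbers, growing the range from 75 m to 200 m multiplies the dense-cell count by roughly 7x, while the occupied-voxel count stays capped by the roughly fixed number of returned points; this is the intuition behind the range-independent cost claimed for FSD.
|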
Language | English
|
Source URL | http://ir.ia.ac.cn/handle/173211/57419
Collection | Institute of Automation, Center for Research on Intelligent Perception and Computing
|
Corresponding Author | Zhang ZX(张兆翔)
Author Affiliations | 1. National Laboratory of Pattern Recognition 2. University of Chinese Academy of Sciences 3. Institute of Automation, Chinese Academy of Sciences 4. Centre for Artificial Intelligence and Robotics, Hong Kong Institute of Science and Innovation, Chinese Academy of Sciences 5. TuSimple 6. School of Future Technology, University of Chinese Academy of Sciences
|
Recommended Citation (GB/T 7714) |
Fan L, Wang F, Wang NY, et al. Fully Sparse 3D Object Detection[C]. In: Advances in Neural Information Processing Systems (NeurIPS 2022). New Orleans, 2022/11/28-2022/12/9.
|