Chinese Academy of Sciences Institutional Repositories Grid
Semantic-Context Graph Network for Point-Based 3D Object Detection

Document Type: Journal Article

Authors: Dong, Shuwei (6); Kong, Xiaoyu (5); Pan, Xingjia (4); Tang, Fan (3); Li, Wei (2); Chang, Yi (6); Dong, Weiming (1)
Journal: IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY
Publication Date: 2023-11-01
Volume: 33; Issue: 11; Pages: 6474-6486
ISSN: 1051-8215
Keywords: 3D object detection; graph neural networks; information entanglement
DOI: 10.1109/TCSVT.2023.3271318
Corresponding Authors: Tang, Fan (tangfan@ict.ac.cn); Chang, Yi (yichang@lu.edu.cn)
Abstract: Point-based indoor 3D object detection has received increasing attention with the growing industrial demand for augmented reality, autonomous driving, and robotics. However, detection precision suffers on inputs with semantic ambiguity, i.e., shape symmetries, occlusion, and missing texture, which can make different objects appear similar from different viewpoints and thus confuse the detection model. Typical point-based detectors relieve this problem by learning proposal representations that combine geometric and semantic information, but the entangled representation may reduce both semantic and spatial discrimination. In this paper, we focus on alleviating the confusion caused by entanglement and enhancing the proposal representation by considering each proposal's semantics and its context within the scene. We propose a semantic-context graph network (SCGNet), which mainly comprises two modules: a category-aware proposal recoding module (CAPR) and a proposal context aggregation module (PCAg). To produce semantically clear features from the entangled representation, the CAPR module learns a high-level semantic embedding for each category to extract discriminative semantic clues. To further enhance the proposal representation by leveraging these semantic clues, the PCAg module builds a graph to mine the most relevant context in the scene. With few bells and whistles, SCGNet achieves state-of-the-art performance and obtains consistent gains when applied to different backbones (0.9% to 2.4% on ScanNet V2 and 1.6% to 2.2% on SUN RGB-D for mAP@0.25). Code is available at https://github.com/dsw-jlurgzn/SCGNet.
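To make the two modules concrete, the following is a minimal, illustrative PyTorch sketch of the ideas the abstract describes, not the authors' implementation (see the linked repository for that). The class names, feature dimensions, soft-assignment recoding, and the k-nearest-neighbour attention graph are assumptions made here for illustration only.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CategoryAwareRecoding(nn.Module):
    """CAPR-style sketch (hypothetical): learn one embedding per category and mix
    the expected category embedding into each proposal feature."""
    def __init__(self, num_classes: int, feat_dim: int):
        super().__init__()
        self.class_embed = nn.Embedding(num_classes, feat_dim)  # high-level semantic embedding per category
        self.fuse = nn.Linear(2 * feat_dim, feat_dim)

    def forward(self, proposal_feats, class_logits):
        # proposal_feats: (B, N, D); class_logits: (B, N, C)
        probs = class_logits.softmax(dim=-1)         # soft category assignment per proposal
        semantic = probs @ self.class_embed.weight   # (B, N, D) expected category embedding
        return self.fuse(torch.cat([proposal_feats, semantic], dim=-1))

class ProposalContextAggregation(nn.Module):
    """PCAg-style sketch (hypothetical): connect each proposal to its k most similar
    proposals and aggregate their features with attention weights."""
    def __init__(self, feat_dim: int, k: int = 8):
        super().__init__()
        self.k = k
        self.query = nn.Linear(feat_dim, feat_dim)
        self.key = nn.Linear(feat_dim, feat_dim)
        self.value = nn.Linear(feat_dim, feat_dim)

    def forward(self, feats):
        # feats: (B, N, D) recoded proposal features
        normed = F.normalize(feats, dim=-1)
        sim = normed @ normed.transpose(1, 2)                      # (B, N, N) cosine similarity
        knn_idx = sim.topk(self.k, dim=-1).indices                 # k most relevant proposals for each proposal
        neigh = torch.gather(
            feats.unsqueeze(1).expand(-1, feats.size(1), -1, -1), 2,
            knn_idx.unsqueeze(-1).expand(-1, -1, -1, feats.size(-1)))  # (B, N, k, D) neighbour features
        attn = (self.query(feats).unsqueeze(2) * self.key(neigh)).sum(-1)
        attn = (attn / feats.size(-1) ** 0.5).softmax(dim=-1)      # (B, N, k) attention over neighbours
        context = (attn.unsqueeze(-1) * self.value(neigh)).sum(2)  # aggregated scene context
        return feats + context                                     # residual enhancement of each proposal

Under these assumptions, the recoding module would run on the detector's proposal features first, and the graph module would then refine the recoded features with scene context before box classification and regression.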
WOS Keywords: CLOUDS
Funding Projects: Beijing Natural Science Foundation [L221013]; National Natural Science Foundation of China [62102162]; National Natural Science Foundation of China [61832016]; National Natural Science Foundation of China [62172126]; National Natural Science Foundation of China [62106063]; National Natural Science Foundation of China [61976102]; National Natural Science Foundation of China [U20B2070]; National Natural Science Foundation of China [U19A2065]
WOS Research Area: Engineering
Language: English
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
WOS Record Number: WOS:001093434100020
Funding Organizations: Beijing Natural Science Foundation; National Natural Science Foundation of China
Source URL: http://ir.ia.ac.cn/handle/173211/54436
Collection: State Key Laboratory of Multimodal Artificial Intelligence Systems
Institute of Automation, National Laboratory of Pattern Recognition, Multimedia Computing and Graphics Group
Corresponding Authors: Tang, Fan; Chang, Yi
Author Affiliations:
1. Chinese Acad Sci, Inst Automat, Beijing 100190, Peoples R China
2. Didiglobal, Beijing 100193, Peoples R China
3. Chinese Acad Sci, Inst Comp Technol, Beijing 100190, Peoples R China
4. Momenta, Beijing 215100, Peoples R China
5. Harbin Inst Technol Shenzhen, Sch Comp Sci & Technol, Shenzhen 518055, Guangdong, Peoples R China
6. Jilin Univ, Sch Artificial Intelligence, Changchun 130012, Peoples R China
Recommended Citation Formats
GB/T 7714: Dong, Shuwei, Kong, Xiaoyu, Pan, Xingjia, et al. Semantic-Context Graph Network for Point-Based 3D Object Detection[J]. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2023, 33(11): 6474-6486.
APA: Dong, Shuwei, Kong, Xiaoyu, Pan, Xingjia, Tang, Fan, Li, Wei, ... & Dong, Weiming. (2023). Semantic-Context Graph Network for Point-Based 3D Object Detection. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 33(11), 6474-6486.
MLA: Dong, Shuwei, et al. "Semantic-Context Graph Network for Point-Based 3D Object Detection." IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY 33.11 (2023): 6474-6486.

Deposit Method: OAI Harvesting

Source: Institute of Automation

