Chinese Academy of Sciences Institutional Repositories Grid
Efficient 6D object pose estimation based on attentive multi-scale contextual information

Document Type: Journal Article

Authors: Gao, Fang (3); Sun, Qingyi (3); Li, Shaodong (3); Li, Wenbo (2); Li, Yong (3); Yu, Jun (1); Shuang, Feng (3)
Journal: IET COMPUTER VISION
Publication Date: 2022-04-02
ISSN: 1751-9632
DOI: 10.1049/cvi2.12101
Corresponding Authors: Li, Shaodong (lishaodongyx@126.com); Li, Yong (yongli@gxu.edu.cn)
Abstract: 6D pose estimation has been pervasively applied in robotic applications such as service robots, collaborative robots, and unmanned warehouses. However, accurate 6D pose estimation remains a challenging problem owing to the complexity of application scenarios, which involve illumination changes, occlusion, and even truncation between objects; prior work also requires additional refinement to obtain accurate object poses. Targeting both efficiency and accuracy in these complex scenes, this paper presents a novel end-to-end network that effectively exploits the contextual information within a neighbourhood region of each pixel to estimate the 6D object pose from RGB-D images. Specifically, the network first applies an attention mechanism to extract effective pixel-wise dense multimodal features, which are then expanded into multi-scale dense features by integrating pixel-wise features at different scales for pose estimation. The proposed method is evaluated extensively on the LineMOD and YCB-Video datasets, and the experimental results show that it is superior to several state-of-the-art baselines in terms of average point distance and average closest point distance.
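The abstract evaluates poses by average point distance (ADD) and average closest point distance (ADD-S), the standard metrics on LineMOD and YCB-Video. Below is a minimal sketch of how these two metrics are typically computed; it is not the paper's network or evaluation code, and the function names, NumPy/SciPy usage, and threshold convention are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree


def transform(points, R, t):
    """Apply a rigid transform (3x3 rotation R, 3-vector t) to an (N, 3) point set."""
    return points @ R.T + t


def add_metric(model_points, R_gt, t_gt, R_pred, t_pred):
    """Average point distance (ADD): mean distance between corresponding
    model points under the ground-truth and predicted poses."""
    gt = transform(model_points, R_gt, t_gt)
    pred = transform(model_points, R_pred, t_pred)
    return np.linalg.norm(gt - pred, axis=1).mean()


def adds_metric(model_points, R_gt, t_gt, R_pred, t_pred):
    """Average closest point distance (ADD-S): for symmetric objects, each
    transformed ground-truth point is matched to its nearest predicted point."""
    gt = transform(model_points, R_gt, t_gt)
    pred = transform(model_points, R_pred, t_pred)
    dists, _ = cKDTree(pred).query(gt)  # nearest-neighbour distances
    return dists.mean()


# Illustrative usage: a pose is commonly counted as correct when ADD
# (or ADD-S for symmetric objects) falls below 10% of the object's diameter.
# correct = add_metric(pts, R_gt, t_gt, R_pred, t_pred) < 0.1 * diameter
```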
Funding Projects: National Natural Science Foundation of China [41871302]; National Natural Science Foundation of China [61773359]; National Natural Science Foundation of China [61720106009]; Guangxi Key Laboratory of Manufacturing System & Advanced Manufacturing Technology [20-065-40S005]; Guangxi Science and Technology Base and Talent Project [2020AC19253]; USTC Research Funds of the Double First-Class Initiative [YD2350002001]; Anhui Provincial Natural Science Foundation [2108085J19]; Anhui Province Key Research and Development Program [202104a05020007]; CAAI-Huawei MindSpore Open Fund [CAAIXSJLJJ-2021-016B]
WOS Research Areas: Computer Science; Engineering
Language: English
WOS Record Number: WOS:000777139300001
Publisher: WILEY
Funding Organizations: National Natural Science Foundation of China; Guangxi Key Laboratory of Manufacturing System & Advanced Manufacturing Technology; Guangxi Science and Technology Base and Talent Project; USTC Research Funds of the Double First-Class Initiative; Anhui Provincial Natural Science Foundation; Anhui Province Key Research and Development Program; CAAI-Huawei MindSpore Open Fund
Source URL: http://ir.hfcas.ac.cn:8080/handle/334002/128679
Collection: Hefei Institutes of Physical Science, Chinese Academy of Sciences
Author Affiliations:
1.Univ Sci & Technol China, Dept Automat, Hefei, Anhui, Peoples R China
2.Chinese Acad Sci, Inst Intelligent Machines, Hefei, Anhui, Peoples R China
3.Guangxi Univ, Guangxi Key Lab Intelligent Control & Maintenance, 100 Daxue East Rd, Nanning 530004, Peoples R China
Recommended Citation
GB/T 7714
Gao, Fang, Sun, Qingyi, Li, Shaodong, et al. Efficient 6D object pose estimation based on attentive multi-scale contextual information[J]. IET COMPUTER VISION, 2022.
APA: Gao, Fang., Sun, Qingyi., Li, Shaodong., Li, Wenbo., Li, Yong., ... & Shuang, Feng. (2022). Efficient 6D object pose estimation based on attentive multi-scale contextual information. IET COMPUTER VISION.
MLA: Gao, Fang, et al. "Efficient 6D object pose estimation based on attentive multi-scale contextual information". IET COMPUTER VISION (2022).

Ingestion Method: OAI Harvesting

Source: Hefei Institutes of Physical Science


Unless otherwise stated, all content in this system is protected by copyright, and all rights are reserved.