Chinese Academy of Sciences Institutional Repositories Grid
Keypoint Context Aggregation For Human Pose Estimation

Document Type: Conference Paper

Authors: Wenzhu Wu 1,2; Weining Wang 1,2; Longteng Guo 1,2; Jing Liu 1,2
Publication Date: 2021-09
Conference Date: 2021-12-26
Conference Venue: Haikou, China
Abstract

Human pose estimation has drawn much attention recently, but it remains challenging due to the deformation of human joints, occlusion between limbs, and similar factors; more discriminative feature representations lead to more accurate predictions. In this paper, we explore the importance of aggregating keypoint contextual information to strengthen feature map representations for human pose estimation. Motivated by the fact that each keypoint is characterized by its related contextual keypoints, we devise a simple yet effective approach, the Keypoint Context Aggregation Module, which aggregates informative keypoint contexts for better keypoint localization. Specifically, we first obtain a rough localization result, which can be regarded as soft keypoint areas. Based on these soft areas, keypoint contexts are purposefully aggregated to strengthen the feature representations. Experiments show that the proposed Keypoint Context Aggregation Module can be plugged into various backbones to boost performance, and our best model achieves a state-of-the-art 75.8% AP on the MSCOCO test-dev split.
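The full paper is not included in this record, but the pipeline described above (a coarse localization head producing soft keypoint areas, followed by context aggregation to strengthen the feature map) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the class name KeypointContextAggregation, the 1x1 convolution heads, and the concatenation-based fusion are all assumptions.

# Illustrative sketch only; layer choices and fusion strategy are assumed,
# and may differ from the paper's actual Keypoint Context Aggregation Module.
import torch
import torch.nn as nn
import torch.nn.functional as F

class KeypointContextAggregation(nn.Module):
    def __init__(self, in_channels: int, num_keypoints: int):
        super().__init__()
        # Rough localization head: coarse heatmaps act as "soft keypoint areas".
        self.coarse_head = nn.Conv2d(in_channels, num_keypoints, kernel_size=1)
        # Fuse the aggregated keypoint contexts back into the feature map.
        self.fuse = nn.Conv2d(2 * in_channels, in_channels, kernel_size=1)

    def forward(self, feats: torch.Tensor):
        b, c, h, w = feats.shape
        coarse = self.coarse_head(feats)                              # (B, K, H, W)
        # Normalize each coarse heatmap into a spatial attention distribution.
        attn = F.softmax(coarse.flatten(2), dim=-1)                   # (B, K, H*W)
        # Per-keypoint context vectors: attention-weighted average of features.
        context = torch.bmm(attn, feats.flatten(2).transpose(1, 2))   # (B, K, C)
        # Redistribute the contexts to every location, weighted by the soft areas.
        context_map = torch.bmm(attn.transpose(1, 2), context)        # (B, H*W, C)
        context_map = context_map.transpose(1, 2).reshape(b, c, h, w)
        # Strengthen the original representation with the aggregated contexts.
        enhanced = self.fuse(torch.cat([feats, context_map], dim=1))
        return enhanced, coarse

In this sketch, each coarse heatmap is used twice: once to pool a per-keypoint context vector and once to redistribute those vectors back onto the spatial grid; the enhanced features could then feed a final keypoint prediction head on top of any backbone.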

Language: English
Source URL: [http://ir.ia.ac.cn/handle/173211/48591]
Collection: Institute of Automation, National Laboratory of Pattern Recognition, Image and Video Analysis Team
Corresponding Author: Jing Liu
Affiliations: 1. School of Artificial Intelligence, University of Chinese Academy of Sciences
2.National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences
Recommended Citation
GB/T 7714
Wenzhu Wu, Weining Wang, Longteng Guo, et al. Keypoint Context Aggregation For Human Pose Estimation[C]. In: . Haikou, China. 2021-12-26.

Deposit Method: OAI Harvesting

Source: Institute of Automation

