Learning Depth-aware Heatmaps for 3D Human Pose Estimation in the Wild
Document type: Conference Paper
Authors | Chen, Zerui (1,4) |
Publication date | 2019-08 |
Conference dates | 2019-09-09 to 2019-09-12 |
Conference location | Cardiff, UK |
Abstract | In this paper, we explore determining 3D human pose directly from monocular image data. While current state-of-the-art approaches employ a volumetric representation to predict a per-voxel likelihood for each human joint, the network output is memory-intensive, making it hard to run on mobile devices. To reduce the output dimension, we decompose the volumetric representation into 2D depth-aware heatmaps and joint depth estimation. We propose to learn depth-aware 2D heatmaps via associative embeddings to reconstruct the connection between each 2D joint location and its corresponding depth. Our approach achieves a good trade-off between complexity and performance. We conduct extensive experiments on the popular Human3.6M benchmark and advance the state-of-the-art accuracy for 3D human pose estimation in the wild. |
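The decomposition described in the abstract can be sketched as follows. This is an illustrative assumption, not the paper's implementation: the joint count, depth resolution, and the `decode_3d` helper are hypothetical, and the sketch only shows why replacing a volumetric K×D×H×W output with a 2D heatmap plus an aligned per-joint depth map shrinks the output, and how 3D joints could be read back out.

```python
import numpy as np

# Hypothetical sizes (assumptions, not the paper's settings):
K, D, H, W = 17, 64, 64, 64  # joints, depth bins, heatmap height, width

# Output-size comparison (number of float values the network must emit):
volumetric = K * D * H * W   # full per-voxel likelihood: 4,456,448 values
decomposed = 2 * K * H * W   # 2D heatmap + depth map per joint: 139,264 values

def decode_3d(heatmaps, depth_maps):
    """Illustrative decoder: the 2D argmax of each joint's heatmap gives
    the pixel location (x, y), and the aligned depth map gives z at that
    same pixel, reconstructing the location-depth connection."""
    K, H, W = heatmaps.shape
    joints = np.zeros((K, 3))
    for k in range(K):
        idx = np.argmax(heatmaps[k])   # flat index of the heatmap peak
        y, x = divmod(idx, W)          # recover 2D pixel coordinates
        joints[k] = (x, y, depth_maps[k, y, x])
    return joints
```

The point of the decomposition is visible in the size comparison: the decomposed output is smaller by a factor of D/2, which is what makes the representation practical on memory-constrained devices.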
Source URL | http://ir.ia.ac.cn/handle/173211/44427 |
Collection | Institute of Automation, Center for Research on Intelligent Perception and Computing |
Corresponding author | Chen, Zerui |
Affiliations | 1. Center for Research on Intelligent Perception and Computing (CRIPAC), National Laboratory of Pattern Recognition (NLPR) 2. Center for Excellence in Brain Science and Intelligence Technology (CEBSIT), Institute of Automation, Chinese Academy of Sciences (CASIA) 3. School of Astronautics, Beihang University 4. Chinese Academy of Sciences Artificial Intelligence Research (CAS-AIR) 5. University of Chinese Academy of Sciences (UCAS) |
Recommended citation (GB/T 7714) | Chen, Zerui, Guo, Yiru, Huang, Yan, et al. Learning Depth-aware Heatmaps for 3D Human Pose Estimation in the Wild[C]. Cardiff, UK, 2019-09-09 to 2019-09-12. |
Deposit method: OAI harvesting
Source: Institute of Automation
Unless otherwise stated, all content in this system is protected by copyright, and all rights are reserved.