Richly Activated Graph Convolutional Network for Robust Skeleton-Based Action Recognition
Document type: Journal article
Authors | Song, Yi-Fan; Zhang, Zhang; Shan, Caifeng; Wang, Liang |
Journal | IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY |
Publication date | 2021-05-01 |
Volume | 31 |
Issue | 5 |
Pages | 1915-1925 |
Keywords | Skeleton; Robustness; Noise measurement; Three-dimensional displays; Degradation; Standards; Feature extraction; Action recognition; skeleton activation map; graph convolutional network; occlusion; jittering |
ISSN | 1051-8215 |
DOI | 10.1109/TCSVT.2020.3015051 |
Abstract (English) | Current methods for skeleton-based human action recognition usually work with complete skeletons. However, in real scenarios, it is inevitable to capture incomplete or noisy skeletons, which could significantly deteriorate the performance of current methods when some informative joints are occluded or disturbed. To improve the robustness of action recognition models, a multi-stream graph convolutional network (GCN) is proposed to explore sufficient discriminative features spreading over all skeleton joints, so that the distributed redundant representation reduces the sensitivity of the action models to non-standard skeletons. Concretely, the backbone GCN is extended by a series of ordered streams, which are responsible for learning discriminative features from the joints less activated by preceding streams. Here, the activation degrees of skeleton joints of each GCN stream are measured by the class activation maps (CAM), and only the information from the unactivated joints will be passed to the next stream, by which rich features over all active joints are obtained. Thus, the proposed method is termed richly activated GCN (RA-GCN). Compared to the state-of-the-art (SOTA) methods, the RA-GCN achieves comparable performance on the standard NTU RGB+D 60 and 120 datasets. More crucially, on the synthetic occlusion and jittering datasets, the performance deterioration due to the occluded and disturbed joints can be significantly alleviated by utilizing the proposed RA-GCN. |
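The CAM-based joint masking that the abstract describes can be illustrated with a small sketch. The following is a minimal, hypothetical NumPy sketch of that idea, not the authors' released RA-GCN code: the function names joint_cam and next_stream_mask, the thresholding rule, and the parameter alpha are illustrative assumptions, and the paper's exact formulation may differ.

```python
import numpy as np

def joint_cam(features, fc_weights, class_idx):
    """Per-joint class activation score for one GCN stream.

    features:   (C, T, V) feature map from the stream's last graph-conv layer
                (C channels, T frames, V joints), assumed to feed a linear
                classifier with weights fc_weights of shape (num_classes, C).
    Returns an activation score per joint, shape (V,).
    """
    w = fc_weights[class_idx]                      # (C,) weights of the target class
    cam = np.einsum('c,ctv->tv', w, features)      # (T, V) class activation map
    cam = np.maximum(cam, 0.0)                     # keep positive evidence only
    return cam.mean(axis=0)                        # aggregate over frames -> (V,)

def next_stream_mask(prev_mask, cam_per_joint, alpha=0.5):
    """Zero out joints already 'activated' by preceding streams.

    prev_mask: (V,) binary mask of joints still available to this stream.
    A joint counts as activated when its CAM exceeds alpha * max CAM;
    this threshold is an illustrative choice, not the paper's exact rule.
    """
    activated = cam_per_joint > alpha * cam_per_joint.max()
    return prev_mask * (~activated)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    C, T, V, num_classes = 64, 30, 25, 60          # NTU-like sizes (25 joints)
    feats = rng.standard_normal((C, T, V))
    fc_w = rng.standard_normal((num_classes, C))
    mask = np.ones(V)
    cam = joint_cam(feats, fc_w, class_idx=3)
    mask = next_stream_mask(mask, cam)
    print("joints passed to the next stream:", int(mask.sum()), "of", V)
```

In the described method, such a mask is applied to the skeleton input of the following stream, so each new stream learns from joints not yet exploited by its predecessors, and the streams together yield the "richly activated" representation.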
Funding projects | National Key Research and Development Program of China [2016YFB1001002]; National Natural Science Foundation of China [61525306]; National Natural Science Foundation of China [61633021]; National Natural Science Foundation of China [61721004]; Shandong Provincial Key Research and Development Program (Major Scientific and Technological Innovation Project) [2019JZZY010119]; Artificial Intelligence Research, Chinese Academy of Sciences (CAS-AIR) [2019-001] |
WOS research area | Engineering |
Language | English |
WOS accession number | WOS:000647394100019 |
Publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC |
Funding organizations | National Key Research and Development Program of China; National Natural Science Foundation of China; Shandong Provincial Key Research and Development Program (Major Scientific and Technological Innovation Project); Artificial Intelligence Research, Chinese Academy of Sciences (CAS-AIR) |
Source URL | http://ir.ia.ac.cn/handle/173211/44642 |
Collection | Institute of Automation, Center for Research on Intelligent Perception and Computing |
Corresponding author | Zhang, Zhang |
Author affiliations | 1. Chinese Acad Sci, Inst Automat, Beijing 100190, Peoples R China; 2. Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing 100190, Peoples R China; 3. Shandong Univ Sci & Technol, Coll Elect Engn & Automat, Qingdao 266590, Peoples R China |
Recommended citation (GB/T 7714) | Song, Yi-Fan, Zhang, Zhang, Shan, Caifeng, et al. Richly Activated Graph Convolutional Network for Robust Skeleton-Based Action Recognition[J]. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2021, 31(5): 1915-1925. |
APA | Song, Yi-Fan, Zhang, Zhang, Shan, Caifeng, & Wang, Liang. (2021). Richly Activated Graph Convolutional Network for Robust Skeleton-Based Action Recognition. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 31(5), 1915-1925.
MLA | Song, Yi-Fan, et al. "Richly Activated Graph Convolutional Network for Robust Skeleton-Based Action Recognition." IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY 31.5 (2021): 1915-1925.
Ingestion method: OAI harvesting
Source: Institute of Automation