Chinese Academy of Sciences Institutional Repositories Grid
Learning Semantic-Aware Spatial-Temporal Attention for Interpretable Action Recognition

Document Type: Journal Article

Authors: Fu, Jie (3,4); Gao, Junyu (2,3); Xu, Changsheng (1,3)
Journal: IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY
Publication Date: 2022-08-01
Volume: 32, Issue: 8, Pages: 5213-5224
Keywords: Visualization; Semantics; Task analysis; Three-dimensional displays; Feature extraction; Solid modeling; Predictive models; Semantic-aware spatial-temporal attention; Interpretable action recognition
ISSN: 1051-8215
DOI: 10.1109/TCSVT.2021.3137023
Corresponding Author: Xu, Changsheng (csxu@nlpr.ia.ac.cn)
Abstract: Human beings can concentrate on the most semantically relevant visual information when performing action recognition, and so make reasonable and interpretable predictions. However, most existing approaches to visual tasks do not explicitly imitate this ability to improve the performance and reliability of models. In this paper, we propose an interpretable action recognition framework that improves both the performance and the visual interpretability of 3D CNNs. Specifically, we design a semantic-aware attention module that learns correlative spatial-temporal attention for different action categories. To further leverage the rich semantics of features extracted from different layers, we design a hierarchical semantic fusion module guided by the learned attention. The two modules enhance and complement each other, and the semantic-aware attention module is also plug-and-play. We evaluate our method on several benchmarks with comprehensive ablation studies and visualization analysis. Experimental results demonstrate the effectiveness of our method, which achieves favorable accuracy against state-of-the-art methods while enhancing semantic interpretability (code will be available at https://github.com/PHDJieFu).
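
The abstract outlines two components: a plug-and-play semantic-aware attention module that learns spatial-temporal attention per action category over 3D CNN features, and a hierarchical semantic fusion module that combines features from different layers under that attention. As a rough illustration of the first idea only (this is not the authors' released code, which is linked above; the class name, layer choice, and pooling scheme below are all assumptions), a minimal PyTorch sketch might look like this:

import torch
import torch.nn as nn

class SemanticAwareAttention(nn.Module):
    # Hypothetical sketch: one spatial-temporal attention map is learned
    # per action category ("semantic-aware"), applied to 3D CNN features.
    def __init__(self, in_channels: int, num_classes: int):
        super().__init__()
        # A 1x1x1 conv head produces one attention logit map per category.
        self.att_head = nn.Conv3d(in_channels, num_classes, kernel_size=1)

    def forward(self, feats: torch.Tensor):
        # feats: (B, C, T, H, W) from an intermediate 3D backbone layer.
        b, c, t, h, w = feats.shape
        att = self.att_head(feats)              # (B, K, T, H, W)
        att = att.flatten(2).softmax(dim=-1)    # normalize over T*H*W positions
        att = att.view(b, -1, t, h, w)
        # Attention-weighted pooling: one C-dim descriptor per category.
        pooled = torch.einsum('bkthw,bcthw->bkc', att, feats)
        return pooled, att

# Usage on dummy features from a 3D backbone:
feats = torch.randn(2, 256, 8, 14, 14)
block = SemanticAwareAttention(in_channels=256, num_classes=400)
pooled, att = block(feats)   # pooled: (2, 400, 256), att: (2, 400, 8, 14, 14)

In such a design, the per-class map in att for the predicted category is what one would visualize to inspect which spatial-temporal regions drove the decision, which is the interpretability angle the paper emphasizes.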
Funding Projects: National Key Research and Development Plan of China [2020AAA0106200]; National Natural Science Foundation of China [62036012, 61721004, 62102415, 62072286, 61720106006, 61832002, 62072455, 62002355, U1836220, U1705262]; Key Research Program of Frontier Sciences of the Chinese Academy of Sciences (CAS) [QYZDJSSW-JSC039]; Beijing Natural Science Foundation [L201001]
WOS Research Area: Engineering
Language: English
WOS Record ID: WOS:000835828500026
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Funding Organizations: National Key Research and Development Plan of China; National Natural Science Foundation of China; Key Research Program of Frontier Sciences of the Chinese Academy of Sciences (CAS); Beijing Natural Science Foundation
Source URL: http://ir.ia.ac.cn/handle/173211/49812
Collection: Institute of Automation, National Laboratory of Pattern Recognition, Multimedia Computing and Graphics Team
作者单位1.Peng Cheng Lab, Shenzhen 518066, Peoples R China
2.Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing 100049, Peoples R China
3.Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
4.Zhengzhou Univ, Sch Informat Engn, Zhengzhou 450001, Peoples R China
Recommended Citation:
GB/T 7714
Fu, Jie, Gao, Junyu, Xu, Changsheng. Learning Semantic-Aware Spatial-Temporal Attention for Interpretable Action Recognition[J]. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2022, 32(8): 5213-5224.
APA: Fu, Jie, Gao, Junyu, & Xu, Changsheng. (2022). Learning Semantic-Aware Spatial-Temporal Attention for Interpretable Action Recognition. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 32(8), 5213-5224.
MLA: Fu, Jie, et al. "Learning Semantic-Aware Spatial-Temporal Attention for Interpretable Action Recognition". IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY 32.8 (2022): 5213-5224.

Deposit Method: OAI Harvesting

Source: Institute of Automation

