Chinese Academy of Sciences Institutional Repositories Grid
Decoupled Spatial-Temporal Attention Network for Skeleton-Based Action-Gesture Recognition

Document Type: Conference Paper

Authors: Shi L (史磊)1,3; Zhang YF (张一帆)1,3; Cheng J (程健)1,2,3; Lu HQ (卢汉清)1,3
Publication Date: 2020
Conference Date: 2020
Conference Venue: Kyoto, Japan
Abstract

Dynamic skeletal data, represented as the 2D/3D coordinates of human joints, has been widely studied for human action recognition due to its high-level semantic information and environmental robustness. However, previous methods rely heavily on hand-crafted traversal rules or graph topologies to capture dependencies between joints, which limits both performance and generalizability. In this work, we present a novel decoupled spatial-temporal attention network (DSTA-Net) for skeleton-based action recognition. It consists solely of attention blocks, allowing it to model spatial-temporal dependencies between joints without requiring knowledge of their positions or mutual connections. Specifically, to meet the particular requirements of skeletal data, three techniques are proposed for building attention blocks, namely, spatial-temporal attention decoupling, decoupled position encoding, and spatial global regularization. In addition, on the data side, we introduce a skeletal data decoupling technique that emphasizes the specific characteristics of space/time and different motion scales, resulting in a more comprehensive understanding of human actions. To test the effectiveness of the proposed method, extensive experiments are conducted on four challenging datasets for skeleton-based gesture and action recognition, namely, SHREC, DHG, NTU-60, and NTU-120, where DSTA-Net achieves state-of-the-art performance on all of them.
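The core idea of spatial-temporal attention decoupling, as described in the abstract, is to attend over joints within each frame and over frames for each joint as two separate steps, rather than one joint attention over the full spatial-temporal graph. The following is a minimal numpy sketch of that idea, not the authors' implementation: it uses a single head with no learned projections, position encoding, or regularization, and all function names are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def decoupled_attention(x):
    """Illustrative decoupled spatial-temporal self-attention.

    x: array of shape (T, V, C) -- T frames, V joints, C channels.
    Spatial attention lets joints attend to each other within a frame;
    temporal attention lets frames attend to each other per joint.
    """
    T, V, C = x.shape
    scale = 1.0 / np.sqrt(C)
    # Spatial step: for each frame, a V x V attention map over joints.
    a_s = softmax(x @ x.transpose(0, 2, 1) * scale, axis=-1)      # (T, V, V)
    x_s = a_s @ x                                                 # (T, V, C)
    # Temporal step: for each joint, a T x T attention map over frames.
    xt = x_s.transpose(1, 0, 2)                                   # (V, T, C)
    a_t = softmax(xt @ xt.transpose(0, 2, 1) * scale, axis=-1)    # (V, T, T)
    return (a_t @ xt).transpose(1, 0, 2)                          # (T, V, C)

# Toy skeleton sequence: 4 frames, 5 joints, 8 channels.
rng = np.random.default_rng(0)
out = decoupled_attention(rng.standard_normal((4, 5, 8)))
print(out.shape)  # (4, 5, 8)
```

Decoupling keeps the attention maps at sizes V x V and T x T instead of (T*V) x (T*V), which is what makes pure attention tractable for long skeleton sequences.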

Proceedings Publisher: IEEE Computer Society
Language: English
Source URL: [http://ir.ia.ac.cn/handle/173211/44377]
Collection: Institute of Automation, National Laboratory of Pattern Recognition, Image and Video Analysis Group
Affiliations:
1. NLPR & AIRIA, Institute of Automation
2. School of Artificial Intelligence, University of Chinese Academy of Sciences
3. CAS Center for Excellence in Brain Science and Intelligence Technology
Recommended Citation (GB/T 7714):
Shi L, Zhang YF, Cheng J, et al. Decoupled Spatial-Temporal Attention Network for Skeleton-Based Action-Gesture Recognition[C]. Kyoto, Japan, 2020.

Deposit Method: OAI harvesting

Source: Institute of Automation


Unless otherwise stated, all content in this system is protected by copyright, and all rights are reserved.