Spatial attention based visual semantic learning for action recognition in still images
Document Type: Journal Article
作者 | Zheng, Yunpeng 1,2; Zheng, Xiangtao 2 |
Journal | NEUROCOMPUTING |
Publication Date | 2020-11-06 |
Volume | 413 | Pages | 383-396 |
Keywords | Still image-based action recognition; Spatial attention; Semantic parts; Deep learning |
ISSN | 0925-2312; 1872-8286 |
DOI | 10.1016/j.neucom.2020.07.016 |
Affiliation Ranking | 1 |
Abstract (English) | Visual semantic parts play crucial roles in still image-based action recognition. A majority of existing methods require additional manual annotations such as human bounding boxes and predefined body parts besides action labels to learn action related visual semantic parts. However, labeling these manual annotations is rather time-consuming and labor-intensive. Moreover, not all manual annotations are effective when recognizing a specific action. Some of them can be irrelevant and even misleading. To address these limitations, this paper proposes a multi-stage deep learning method called Spatial Attention based Action Mask Networks (SAAM-Nets). The proposed method does not need any additional annotations besides action labels to obtain action-specific visual semantic parts. Instead, we propose a spatial attention layer injected in a convolutional neural network to create a specific action mask for each image with only action labels. Moreover, based on the action mask, we propose a region selection strategy to generate a semantic bounding box containing action-specific semantic parts. Furthermore, to effectively combine the information of the whole scene and the semantic box, two feature attention layers are adopted to obtain more discriminative representations. Experiments on four benchmark datasets demonstrate that the proposed method can achieve promising performance compared with state-of-the-art methods. |
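The abstract outlines a three-stage pipeline: a spatial attention layer that produces an action mask from action labels alone, a region-selection step that turns the mask into a semantic bounding box, and feature attention layers that fuse scene and box representations. Below is a minimal PyTorch sketch of only the first two stages; the `SpatialAttention` and `box_from_mask` names, the 1x1-convolution scoring, and the min-max threshold rule are illustrative assumptions, not the paper's actual SAAM-Nets design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpatialAttention(nn.Module):
    """Hypothetical layer: score every spatial location of a CNN feature
    map and normalize the scores into a single-channel attention mask."""

    def __init__(self, in_channels: int):
        super().__init__()
        # A 1x1 convolution collapses the channels to one score per cell.
        self.score = nn.Conv2d(in_channels, 1, kernel_size=1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W) backbone feature map
        b, _, h, w = feats.shape
        logits = self.score(feats).view(b, -1)           # (B, H*W)
        mask = F.softmax(logits, dim=1).view(b, 1, h, w)
        return mask                                      # (B, 1, H, W), sums to 1


def box_from_mask(mask: torch.Tensor, thresh: float = 0.5):
    """Toy region selection: min-max normalize the mask of one image and
    return the bounding box (x1, y1, x2, y2) of cells above `thresh`,
    in feature-map coordinates."""
    m = mask[0, 0]
    m = (m - m.min()) / (m.max() - m.min() + 1e-8)
    ys, xs = torch.nonzero(m > thresh, as_tuple=True)
    if ys.numel() == 0:
        return None
    return xs.min().item(), ys.min().item(), xs.max().item(), ys.max().item()


# Example: a fake 512-channel, 14x14 feature map (e.g., from a ResNet stage).
feats = torch.randn(1, 512, 14, 14)
mask = SpatialAttention(512)(feats)
print(box_from_mask(mask))
```

In the paper's setting the attention layer would be trained end-to-end with the classification loss, so the mask concentrates on action-relevant regions; the random weights here merely show the data flow.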
Language | English |
WOS Record No. | WOS:000579803700032 |
Publisher | ELSEVIER |
Source URL | http://ir.opt.ac.cn/handle/181661/93762 |
Collection | Xi'an Institute of Optics and Precision Mechanics_Center for Optical Imagery Analysis and Learning |
Corresponding Author | Zheng, Xiangtao |
Author Affiliations | 1. Univ Chinese Acad Sci, Beijing 100049, Peoples R China; 2. Chinese Acad Sci, Xian Inst Opt & Precis Mech, Key Lab Spectral Imaging Technol CAS, Xian 710119, Shaanxi, Peoples R China |
Recommended Citation (GB/T 7714) | Zheng, Yunpeng, Zheng, Xiangtao, Lu, Xiaoqiang, et al. Spatial attention based visual semantic learning for action recognition in still images[J]. NEUROCOMPUTING, 2020, 413: 383-396. |
APA | Zheng, Yunpeng, Zheng, Xiangtao, Lu, Xiaoqiang, & Wu, Siyuan. (2020). Spatial attention based visual semantic learning for action recognition in still images. NEUROCOMPUTING, 413, 383-396. |
MLA | Zheng, Yunpeng, et al. "Spatial attention based visual semantic learning for action recognition in still images". NEUROCOMPUTING 413 (2020): 383-396. |
Deposit Method: OAI Harvesting
Source: Xi'an Institute of Optics and Precision Mechanics