Chinese Academy of Sciences Institutional Repositories Grid
End-to-end Video-level Representation Learning for Action Recognition

Document Type: Conference Paper

Authors: Zhu, Jiagang [1,2]; Zhu, Zheng [1,2]; Zou, Wei [1]
Publication Date: 2018-11
Conference Date: August 20-24, 2018
Conference Venue: Beijing, China
Abstract

From frame/clip-level feature learning to video-level representation building, deep learning methods for action recognition have developed rapidly in recent years. However, current methods suffer from the confusion caused by partial-observation training, lack end-to-end learning, or are restricted to single-scale temporal modeling. In this paper, we build upon two-stream ConvNets and propose Deep networks with Temporal Pyramid Pooling (DTPP), an end-to-end video-level representation learning approach, to address these problems. Specifically, RGB images and optical flow stacks are first sparsely sampled across the whole video. A temporal pyramid pooling layer then aggregates the frame-level features, which carry both spatial and temporal cues. The trained model thus yields a compact video-level representation over multiple temporal scales that is both global and sequence-aware. Experimental results show that DTPP achieves state-of-the-art performance on two challenging video action datasets, UCF101 and HMDB51, with either ImageNet or Kinetics pre-training.
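The aggregation step above can be made concrete with a short sketch. The following minimal NumPy illustration shows temporal pyramid pooling over frame-level features; the pyramid levels (1, 2, 4), the max-pooling operator, and the feature dimension are illustrative assumptions here, not the paper's actual configuration.

```python
# A minimal sketch of temporal pyramid pooling (TPP): frame-level features
# are pooled at multiple temporal scales and concatenated into one
# fixed-length video-level representation. Levels and pooling op are
# assumptions for illustration only.
import numpy as np

def temporal_pyramid_pool(frame_feats: np.ndarray, levels=(1, 2, 4)) -> np.ndarray:
    """Aggregate (T, D) frame-level features into a video-level vector.

    For each pyramid level L, the T frames are split into L contiguous
    segments, each segment is max-pooled over time, and all pooled
    vectors are concatenated, giving a D * sum(levels) representation.
    """
    T, _ = frame_feats.shape
    pooled = []
    for L in levels:
        # Segment boundaries covering all T frames as evenly as possible.
        bounds = np.linspace(0, T, L + 1, dtype=int)
        for i in range(L):
            segment = frame_feats[bounds[i]:bounds[i + 1]]
            pooled.append(segment.max(axis=0))
    return np.concatenate(pooled)

# Example: 25 sparsely sampled frames, each with a (hypothetical) 1024-D feature.
feats = np.random.rand(25, 1024).astype(np.float32)
video_repr = temporal_pyramid_pool(feats)
print(video_repr.shape)  # (7168,) == 1024 * (1 + 2 + 4)
```

Because every level pools over a fixed number of segments, the output length is independent of the number of sampled frames, which is what allows a single video-level representation to cover multiple temporal scales.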

Language: English
Source URL: http://ir.ia.ac.cn/handle/173211/39107
Collection: Research Center for Precision Sensing and Control_Precision Sensing and Control
Corresponding Author: Zou, Wei
Affiliations:
1. Chinese Acad Sci, Inst Automat, Beijing 100190, Peoples R China
2. Univ Chinese Acad Sci, Beijing 100190, Peoples R China
Recommended Citation (GB/T 7714):
Zhu, Jiagang, Zhu, Zheng, Zou, Wei. End-to-end Video-level Representation Learning for Action Recognition[C]. In: . Beijing, China. August 20-24, 2018.

Ingest Method: OAI harvesting

Source: Institute of Automation

