Chinese Academy of Sciences Institutional Repositories Grid
STAM: A SpatioTemporal Attention Based Memory for Video Prediction

Document type: Journal article

Authors: Chang, Zheng1,2,4; Zhang, Xinfeng3; Wang, Shanshe2; Ma, Siwei2; Gao, Wen2
Journal: IEEE TRANSACTIONS ON MULTIMEDIA
Publication date: 2023
Volume: 25, Pages: 2354-2367
Keywords: Global spatiotemporal information; spatiotemporal receptive field; 3D convolutional neural network; spatiotemporal attention; sequence learning; video prediction
ISSN: 1520-9210
DOI: 10.1109/TMM.2022.3146721
Abstract: Video prediction has always been a very challenging problem in video representation learning due to the complexity of spatial structure and temporal variation. However, existing methods mainly predict videos by employing language-based memory structures from traditional Long Short-Term Memories (LSTMs) or Gated Recurrent Units (GRUs), which may not be powerful enough to model the long-term dependencies in videos, whose spatiotemporal dynamics are far more complex than those of sentences. In this paper, we propose a SpatioTemporal Attention based Memory (STAM), which can efficiently improve long-term spatiotemporal memorizing capacity by incorporating the global spatiotemporal information in videos. In the temporal domain, the proposed STAM observes temporal states from a wider temporal receptive field to capture accurate global motion information. In the spatial domain, it jointly utilizes both the high-level semantic spatial state and the low-level texture spatial states to model a more reliable global spatial representation for videos. In particular, the global spatiotemporal information is extracted with the help of an Efficient SpatioTemporal Attention Gate (ESTAG), which can adaptively apply different levels of attention scores to different spatiotemporal states according to their importance. Moreover, the proposed STAM is built with 3D convolutional layers due to their advantages in modeling spatiotemporal dynamics for videos. Experimental results show that the proposed STAM achieves state-of-the-art performance on widely used datasets by leveraging the proposed spatiotemporal representations for videos.
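The core idea the abstract describes, applying adaptive attention scores to a set of past spatiotemporal states and aggregating them into a global context, can be illustrated with a minimal NumPy sketch. This is not the paper's actual ESTAG (which uses 3D convolutional layers inside a recurrent memory); the dot-product scoring, state shapes, and softmax normalization below are illustrative assumptions.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def spatiotemporal_attention_gate(query, states):
    """Weight T past spatiotemporal states by relevance to the current state.

    query:  (H, W, C) current spatiotemporal state
    states: (T, H, W, C) past states from a wider temporal receptive field
    Returns the attention-weighted global context and the T attention weights.
    """
    T = states.shape[0]
    flat_states = states.reshape(T, -1)            # (T, H*W*C)
    flat_query = query.reshape(-1)                 # (H*W*C,)
    # Scaled dot-product scores: one relevance score per past state.
    scores = flat_states @ flat_query / np.sqrt(flat_query.size)
    weights = softmax(scores)                      # attention over time, sums to 1
    # Global context = importance-weighted sum of the past states.
    context = (weights[:, None] * flat_states).sum(axis=0)
    return context.reshape(query.shape), weights

rng = np.random.default_rng(0)
query = rng.standard_normal((4, 4, 2))
states = rng.standard_normal((5, 4, 4, 2))
context, weights = spatiotemporal_attention_gate(query, states)
```

The softmax makes the gate adaptive in the sense the abstract describes: states that correlate more strongly with the current state receive larger weights, so the aggregated context emphasizes the most relevant motion and texture information.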
Funding: National Natural Science Foundation of China [62025101, 62072008, 62071449, U20A20184]; Fundamental Research Funds for the Central Universities; High-performance Computing Platform of Peking University
WOS research areas: Computer Science; Telecommunications
Language: English
WOS accession number: WOS:001007432100058
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Source URL: [http://119.78.100.204/handle/2XEOYT63/21265]
Collection: Institute of Computing Technology, Chinese Academy of Sciences, Journal Papers (English)
Corresponding author: Ma, Siwei
Author affiliations:
1. Chinese Acad Sci, Inst Comp Technol, Beijing 100190, Peoples R China
2.Peking Univ, Natl Engn Lab Video Technol, Beijing 100871, Peoples R China
3.Univ Chinese Acad Sci, Sch Comp Sci & Technol, Beijing 100871, Peoples R China
4.Univ Chinese Acad Sci, Beijing 100190, Peoples R China
Recommended citation:
GB/T 7714
Chang, Zheng, Zhang, Xinfeng, Wang, Shanshe, et al. STAM: A SpatioTemporal Attention Based Memory for Video Prediction[J]. IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25: 2354-2367.
APA: Chang, Zheng, Zhang, Xinfeng, Wang, Shanshe, Ma, Siwei, & Gao, Wen. (2023). STAM: A SpatioTemporal Attention Based Memory for Video Prediction. IEEE TRANSACTIONS ON MULTIMEDIA, 25, 2354-2367.
MLA: Chang, Zheng, et al. "STAM: A SpatioTemporal Attention Based Memory for Video Prediction." IEEE TRANSACTIONS ON MULTIMEDIA 25 (2023): 2354-2367.

Ingest method: OAI harvesting

Source: Institute of Computing Technology

Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.