Chinese Academy of Sciences Institutional Repositories Grid
MOSO: Decomposing MOtion, Scene and Object for Video Prediction

Document type: Conference paper

Authors: Sun, Mingzhen (1,2); Wang, Weining (2); Zhu, Xinxin (2); Liu, Jing (1,2)
Publication date: 2023
Conference date: 2023-06-18
Conference location: Vancouver, Canada
Abstract

Motion, scene and object are three primary visual components of a video. In particular, objects represent the foreground, scenes represent the background, and motion traces their dynamics. Based on this insight, we propose a two-stage MOtion, Scene and Object decomposition framework (MOSO) for video prediction, consisting of MOSO-VQVAE and MOSO-Transformer. In the first stage, MOSO-VQVAE decomposes a previous video clip into the motion, scene and object components and represents them as distinct groups of discrete tokens. In the second stage, MOSO-Transformer predicts the object and scene tokens of the subsequent video clip from the previous tokens and adds dynamic motion at the token level to the generated object and scene tokens. Our framework can be easily extended to unconditional video generation and video frame interpolation tasks. Experimental results demonstrate that our method achieves new state-of-the-art performance on five challenging benchmarks for video prediction and unconditional video generation: BAIR, RoboNet, KTH, KITTI and UCF101. In addition, MOSO can produce realistic videos by combining objects and scenes from different videos.
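The two-stage pipeline in the abstract can be illustrated with a toy sketch. All names below are hypothetical stand-ins, not the authors' released code: stage 1 mimics MOSO-VQVAE by splitting a clip into motion, scene and object token groups, and stage 2 mimics MOSO-Transformer by predicting the next clip's scene/object tokens and applying motion on top.

```python
# Hypothetical sketch of the two-stage MOSO pipeline described in the
# abstract. All function names and data shapes here are illustrative;
# a real implementation would use learned VQ-VAE codes and a transformer.

def decompose_clip(clip):
    """Stage 1 (MOSO-VQVAE stand-in): split a clip into three token groups.

    Each toy frame is a dict with 'fg' (foreground/object), 'bg'
    (background/scene) and 'delta' (per-frame motion) fields; a real
    VQ-VAE would quantize learned feature maps into discrete tokens.
    """
    motion_tokens = [f["delta"] for f in clip]
    scene_tokens = [f["bg"] for f in clip]
    object_tokens = [f["fg"] for f in clip]
    return motion_tokens, scene_tokens, object_tokens


def predict_next_tokens(motion, scene, obj):
    """Stage 2 (MOSO-Transformer stand-in): predict the next clip's
    scene/object tokens from the previous ones, then add motion.

    The real model predicts discrete tokens with a transformer; this toy
    version carries appearance forward and applies the last observed
    motion token to every object token of the predicted clip.
    """
    last_motion = motion[-1]
    next_obj = [t + last_motion for t in obj]   # foreground moves
    next_scene = list(scene)                    # background stays static
    return next_scene, next_obj


# Usage: a 3-frame "clip" whose foreground shifts by 1 each frame.
clip = [{"fg": i, "bg": 0, "delta": 1} for i in range(3)]
m, s, o = decompose_clip(clip)
next_scene, next_obj = predict_next_tokens(m, s, o)
print(next_obj)  # → [1, 2, 3]
```

The separation of static scene tokens from moving object tokens is what lets the framework recombine objects and scenes from different videos, as noted at the end of the abstract.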

Language: English
Source URL: http://ir.ia.ac.cn/handle/173211/51620
Collection: Zidong Taichu Large Model Research Center
Corresponding author: Liu, Jing
Affiliations:
1. School of Artificial Intelligence, University of Chinese Academy of Sciences
2. Institute of Automation, Chinese Academy of Sciences
Recommended citation (GB/T 7714):
Sun, Mingzhen, Wang, Weining, Zhu, Xinxin, et al. MOSO: Decomposing MOtion, Scene and Object for Video Prediction[C]. Vancouver, Canada, 2023-06-18.

Deposit method: OAI harvesting

Source: Institute of Automation


Unless otherwise noted, all content in this system is protected by copyright, and all rights are reserved.