Chinese Academy of Sciences Institutional Repositories Grid
ED-T2V: An Efficient Training Framework for Diffusion-based Text-to-Video Generation

Document type: Conference paper

Authors: Liu, Jiawei1,2; Wang, Weining2; Liu, Wei3; He, Qian3; Liu, Jing1,2
Publication year: 2023
Conference date: 2023-06-18
Conference venue: Queensland, Australia
Abstract

Diffusion models have achieved remarkable performance on image generation. However, it is difficult to reproduce this success on video generation because of the expensive training cost. Pretrained image generation models have already acquired visual generation capabilities and can be leveraged for video generation. We therefore propose an Efficient training framework for Diffusion-based Text-to-Video generation (ED-T2V), built on a pretrained text-to-image generation model. To model temporal dynamics, we propose temporal transformer blocks with novel identity attention and temporal cross-attention. ED-T2V has the following advantages: 1) most parameters of the pretrained model are frozen to inherit its generation capabilities and reduce the training cost; 2) identity attention requires the currently generated frame to attend to all positions of its previous frame, providing an efficient way to keep the main content consistent across frames while enabling motion generation; 3) temporal cross-attention constructs associations between the textual description and multiple video tokens along the time dimension, which models video motion better than traditional cross-attention. With these benefits, ED-T2V not only significantly reduces the training cost of video diffusion models but also achieves excellent generation fidelity and controllability.
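The abstract describes identity attention as the current frame's tokens attending to all token positions of the previous frame. The record gives no implementation details, so the following is a minimal NumPy sketch under that description alone: `identity_attention` and its tensor layout `(T, N, d)` are hypothetical names and conventions, not the authors' code, and the first frame is simply passed through since it has no predecessor.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def identity_attention(frames):
    """Sketch of identity attention as described in the abstract.

    frames: (T, N, d) array of per-frame token embeddings
            (T frames, N tokens per frame, dimension d).
    For each frame t > 0, its tokens are queries and the tokens of
    frame t-1 serve as keys and values, so every position of the
    current frame attends to all positions of the previous frame.
    Frame 0 is returned unchanged (it has no previous frame).
    """
    T, N, d = frames.shape
    out = [frames[0]]
    for t in range(1, T):
        q, k, v = frames[t], frames[t - 1], frames[t - 1]
        attn = softmax(q @ k.T / np.sqrt(d))  # (N, N) attention weights
        out.append(attn @ v)                  # mix previous-frame content
    return np.stack(out)
```

Because each output frame is a convex combination of the previous frame's tokens, repeated application propagates the main content of early frames forward, which is consistent with the consistency claim in the abstract; the actual model would of course use learned projections for queries, keys, and values.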

Language: English
Source URL: [http://ir.ia.ac.cn/handle/173211/51621]
Research group: Zidong Taichu Large Model Research Center
Corresponding author: Liu, Jing
Author affiliations:
1.School of Artificial Intelligence, University of Chinese Academy of Sciences
2.Institute of Automation, Chinese Academy of Sciences
3.ByteDance Inc
Recommended citation (GB/T 7714):
Liu, Jiawei, Wang, Weining, Liu, Wei, et al. ED-T2V: An Efficient Training Framework for Diffusion-based Text-to-Video Generation[C]. Queensland, Australia, 2023-06-18.

Deposit method: OAI harvesting

Source: Institute of Automation


Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.