Text2Video: An End-to-end Learning Framework for Expressing Text With Videos
Document type: Journal article
Authors | Yang, Xiaoshan (1,2); Zhang, Tianzhu; Xu, Changsheng |
Journal | IEEE TRANSACTIONS ON MULTIMEDIA |
Publication date | 2018-09-01 |
Volume | 20 | Issue | 9 | Pages | 2360-2370 |
Keywords | Multimedia Storytelling; Video Analysis; Deep Learning |
DOI | 10.1109/TMM.2018.2807588 |
Document subtype | Article |
Abstract | Video creation is a challenging and highly professional task that generally involves substantial manual efforts. To ease this burden, a better approach is to automatically produce new videos based on clips from the massive amount of existing videos according to arbitrary text. In this paper, we formulate video creation as a problem of retrieving a sequence of videos for a sentence stream. To achieve this goal, we propose a novel multimodal recurrent architecture for automatic video production. Compared with existing methods, the proposed model has three major advantages. First, it is the first completely integrated end-to-end deep learning system for real-world production to the best of our knowledge. We are among the first to address the problem of retrieving a sequence of videos for a sentence stream. Second, it can effectively exploit the correspondence between sentences and video clips through semantic consistency modeling. Third, it can model the visual coherence well by requiring that the produced videos should be organized coherently in terms of visual appearance. We have conducted extensive experiments on two applications, including video retrieval and video composition. The qualitative and quantitative results obtained on two public datasets used in the Large Scale Movie Description Challenge 2016 both demonstrate the effectiveness of the proposed model compared with other state-of-the-art algorithms. |
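The abstract formulates video creation as retrieving one clip per sentence while balancing semantic consistency (sentence vs. clip) and visual coherence (consecutive clips). A minimal greedy sketch of that objective, assuming precomputed embeddings and cosine similarity: the names `cosine`, `retrieve_sequence`, and the weight `lam` are illustrative only, not the paper's actual learned multimodal recurrent model.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def retrieve_sequence(sentence_embs, clip_sem_embs, clip_vis_embs, lam=0.5):
    """Greedily pick one clip index per sentence.

    Score = semantic_similarity(sentence, clip)
          + lam * visual_coherence(previous chosen clip, clip)
    """
    chosen = []
    for s in sentence_embs:
        best_idx, best_score = None, -math.inf
        for i, (sem, vis) in enumerate(zip(clip_sem_embs, clip_vis_embs)):
            score = cosine(s, sem)
            if chosen:  # coherence with the previously selected clip
                score += lam * cosine(clip_vis_embs[chosen[-1]], vis)
            if score > best_score:
                best_idx, best_score = i, score
        chosen.append(best_idx)
    return chosen
```

In the paper itself, both terms are learned jointly end to end rather than scored greedily; this sketch only makes the two competing criteria concrete.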
WOS keywords | ANNOTATION; REPRESENTATION; NARRATIVES; MOVIE; WEB; TV |
WOS research areas | Computer Science; Telecommunications |
Language | English |
WOS record number | WOS:000442358200010 |
Funding | National Natural Science Foundation of China (61432019; 61572498; 61532009; 61702511; 61720106006; 61711530243); Beijing Natural Science Foundation (4172062); Key Research Program of Frontier Sciences, CAS (QYZDJ-SSW-JSC039) |
Source URL | http://ir.ia.ac.cn/handle/173211/20467 |
Collection | Institute of Automation, National Laboratory of Pattern Recognition, Multimedia Computing and Graphics Group |
Author affiliations | 1. National Lab of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences 2. University of Chinese Academy of Sciences |
Recommended citation (GB/T 7714) | Yang, Xiaoshan, Zhang, Tianzhu, Xu, Changsheng. Text2Video: An End-to-end Learning Framework for Expressing Text With Videos[J]. IEEE TRANSACTIONS ON MULTIMEDIA, 2018, 20(9): 2360-2370. |
APA | Yang, Xiaoshan, Zhang, Tianzhu, & Xu, Changsheng. (2018). Text2Video: An End-to-end Learning Framework for Expressing Text With Videos. IEEE TRANSACTIONS ON MULTIMEDIA, 20(9), 2360-2370. |
MLA | Yang, Xiaoshan, et al. "Text2Video: An End-to-end Learning Framework for Expressing Text With Videos". IEEE TRANSACTIONS ON MULTIMEDIA 20.9 (2018): 2360-2370. |
Ingestion method: OAI harvesting
Source: Institute of Automation
Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.