Read, Watch, Listen, and Summarize: Multi-Modal Summarization for Asynchronous Text, Image, Audio and Video
Document type | Journal article
Authors | Li, Haoran 1,2; Zhu, Junnan 1,2; Ma, Cong; Zhang, Jiajun; Zong, Chengqing
Journal | IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING
Publication date | 2019-05-01
Volume/Issue/Pages | 31(5): 996-1009
Keywords | Summarization; multimedia; multi-modal; cross-modal; natural language processing; computer vision
ISSN | 1041-4347
DOI | 10.1109/TKDE.2018.2848260
Abstract | Automatic text summarization is a fundamental natural language processing (NLP) application that aims to condense a source text into a shorter version. The rapid increase in multimedia data transmission over the Internet necessitates multi-modal summarization (MMS) from asynchronous collections of text, image, audio, and video. In this work, we propose an extractive MMS method that unites techniques from NLP, speech processing, and computer vision to exploit the rich information contained in multi-modal data and to improve the quality of multimedia news summarization. The key idea is to bridge the semantic gaps between multi-modal content. Audio and visual signals are the main modalities in video. For audio information, we design an approach that selectively uses the transcription and infers its salience from the audio signals. For visual information, we learn joint representations of text and images using a neural network. We then capture the coverage of important visual information in the generated summary through text-image matching or multi-modal topic modeling. Finally, all multi-modal aspects are combined to generate a textual summary that maximizes salience, non-redundancy, readability, and coverage through the budgeted optimization of submodular functions. We further introduce a publicly available MMS corpus in English and Chinese. The experimental results obtained on our dataset demonstrate that our methods based on the image-matching and image-topic frameworks outperform other competitive baseline methods.
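The abstract's final step, selecting sentences that maximize a submodular objective under a length budget, is typically solved with a cost-scaled greedy algorithm. Below is a minimal illustrative sketch under assumed details: the function names (`greedy_budgeted`, `coverage`), the toy word-coverage objective, and the word-count cost are stand-ins, not the paper's actual multi-modal formulation.

```python
def greedy_budgeted(sentences, objective, cost, budget, r=1.0):
    """Cost-scaled greedy selection under a length budget.

    objective(list_of_sentences) should be monotone submodular;
    r scales how strongly cost penalizes the marginal gain.
    """
    selected = []
    remaining = list(range(len(sentences)))
    while remaining:
        used = sum(cost(sentences[i]) for i in selected)
        best, best_ratio = None, 0.0
        for i in remaining:
            c = cost(sentences[i])
            if used + c > budget:  # candidate would exceed the budget
                continue
            gain = (objective([sentences[j] for j in selected + [i]])
                    - objective([sentences[j] for j in selected]))
            ratio = gain / (c ** r)  # marginal gain per unit cost
            if ratio > best_ratio:
                best, best_ratio = i, ratio
        if best is None:  # nothing fits or nothing improves the objective
            break
        selected.append(best)
        remaining.remove(best)
    return [sentences[i] for i in selected]

def coverage(selected_sents):
    """Toy monotone submodular objective: distinct words covered."""
    return len({w for s in selected_sents for w in s.split()})

sents = ["the cat sat", "the cat", "a dog ran far", "a dog"]
summary = greedy_budgeted(sents, coverage,
                          cost=lambda s: len(s.split()), budget=7)
```

The gain-to-cost ratio is what makes the greedy rule budget-aware: a short sentence with modest coverage can beat a long one, which is why redundant candidates ("the cat", "a dog") are skipped here.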
Funding | Natural Science Foundation of China [61333018]; Natural Science Foundation of China [61673380]
WOS research areas | Computer Science; Engineering
Language | English
WOS record number | WOS:000466933000013
Publisher | IEEE COMPUTER SOC
Source URL | http://ir.ia.ac.cn/handle/173211/24573
Collection | National Laboratory of Pattern Recognition_Natural Language Processing
Corresponding author | Li, Haoran
Author affiliations | 1. Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China; 2. Univ Chinese Acad Sci, Beijing 100190, Peoples R China; 3. Chinese Acad Sci, Natl Lab Pattern Recognit, Inst Automat, Beijing 100864, Peoples R China; 4. Chinese Acad Sci, CAS Ctr Excellence Brain Sci & Intelligence Techn, Beijing 100864, Peoples R China; 5. Univ Chinese Acad Sci, Beijing 100049, Peoples R China
Recommended citation (GB/T 7714) | Li, Haoran, Zhu, Junnan, Ma, Cong, et al. Read, Watch, Listen, and Summarize: Multi-Modal Summarization for Asynchronous Text, Image, Audio and Video[J]. IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2019, 31(5): 996-1009.
APA | Li, Haoran, Zhu, Junnan, Ma, Cong, Zhang, Jiajun, & Zong, Chengqing. (2019). Read, Watch, Listen, and Summarize: Multi-Modal Summarization for Asynchronous Text, Image, Audio and Video. IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 31(5), 996-1009.
MLA | Li, Haoran, et al. "Read, Watch, Listen, and Summarize: Multi-Modal Summarization for Asynchronous Text, Image, Audio and Video". IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING 31.5 (2019): 996-1009.
Indexing method: OAI harvesting
Source: Institute of Automation
Unless otherwise stated, all content in this system is protected by copyright, and all rights are reserved.