Graph-based Multimodal Ranking Models for Multimodal Summarization
Document Type: Journal Article
作者 (Authors) | Zhu, Junnan (3,4); Xiang, Lu; Zhou, Yu; Zhang, Jiajun; Zong, Chengqing |
刊名 (Journal) | ACM TRANSACTIONS ON ASIAN AND LOW-RESOURCE LANGUAGE INFORMATION PROCESSING |
出版日期 (Publication Date) | 2021-07-01 |
卷号 (Volume) | 20 | 期号 (Issue) | 4 | 页码 (Pages) | 21 |
关键词 (Keywords) | Multimodal summarization; single-modal; multimodal ranking; unsupervised |
ISSN | 2375-4699 |
DOI | 10.1145/3445794 |
通讯作者 (Corresponding Author) | Zhu, Junnan (junnan.zhu@nlpr.ia.ac.cn) |
英文摘要 (Abstract) | Multimodal summarization aims to extract the most important information from multimedia input. It is becoming increasingly popular due to the rapid growth of multimedia data in recent years. A variety of research focuses on different multimodal summarization tasks. However, existing methods can generate either single-modal output or multimodal output, but not both. In addition, most of them require large numbers of annotated samples for training, which makes them difficult to generalize to other tasks or domains. Motivated by this, we propose a unified framework for multimodal summarization that covers both single-modal output summarization and multimodal output summarization. In our framework, we consider three different scenarios and propose corresponding unsupervised graph-based multimodal summarization models that require no manually annotated document-summary pairs for training: (1) generic multimodal ranking, (2) modal-dominated multimodal ranking, and (3) non-redundant text-image multimodal ranking. Furthermore, an image-text similarity estimation model is introduced to measure the semantic similarity between image and text. Experiments show that our proposed models outperform single-modal summarization methods on both automatic and human evaluation metrics. Moreover, our models can also improve single-modal summarization with the guidance of multimedia information. This study can serve as a benchmark for further research on the multimodal summarization task. |
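The abstract describes unsupervised, graph-based ranking over text and image nodes. The snippet below is a minimal, hypothetical sketch of that general idea only (a PageRank-style power iteration over a joint sentence/image similarity graph), not the authors' actual models: the helper functions `power_iteration_rank` and `build_multimodal_graph`, the `alpha` cross-modal weight, and the toy similarity matrices are assumptions introduced here for illustration, standing in for the paper's image-text similarity estimation model and its three ranking variants.

```python
# Illustrative sketch of unsupervised graph-based multimodal ranking.
# NOT the paper's exact method; similarities and weights are placeholders.
import numpy as np

def power_iteration_rank(sim, damping=0.85, iters=100, tol=1e-6):
    """Rank graph nodes by a PageRank-style power iteration over a
    non-negative similarity matrix `sim` of shape (n, n)."""
    n = sim.shape[0]
    # Row-normalize so each row becomes a transition distribution.
    row_sums = sim.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0
    trans = sim / row_sums
    scores = np.full(n, 1.0 / n)
    for _ in range(iters):
        new_scores = (1 - damping) / n + damping * trans.T @ scores
        if np.abs(new_scores - scores).sum() < tol:
            break
        scores = new_scores
    return scores

def build_multimodal_graph(text_sim, image_sim, cross_sim, alpha=0.5):
    """Assemble one joint graph over n_t sentence nodes and n_i image nodes.
    text_sim  : (n_t, n_t) sentence-sentence similarities
    image_sim : (n_i, n_i) image-image similarities
    cross_sim : (n_t, n_i) sentence-image similarities (the role played by
                an image-text similarity estimation model)
    alpha weights cross-modal edges against within-modal edges."""
    n_t, n_i = cross_sim.shape
    graph = np.zeros((n_t + n_i, n_t + n_i))
    graph[:n_t, :n_t] = (1 - alpha) * text_sim
    graph[n_t:, n_t:] = (1 - alpha) * image_sim
    graph[:n_t, n_t:] = alpha * cross_sim
    graph[n_t:, :n_t] = alpha * cross_sim.T
    return graph

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy similarities for 4 sentences and 2 images, standing in for real
    # textual, visual, and image-text similarity scores.
    text_sim = rng.random((4, 4)); np.fill_diagonal(text_sim, 0.0)
    image_sim = rng.random((2, 2)); np.fill_diagonal(image_sim, 0.0)
    cross_sim = rng.random((4, 2))
    scores = power_iteration_rank(build_multimodal_graph(text_sim, image_sim, cross_sim))
    print("sentence scores:", scores[:4])
    print("image scores   :", scores[4:])
```

In such a sketch, the highest-scoring sentence nodes would form the textual summary and the highest-scoring image nodes the pictorial summary; varying how within-modal and cross-modal edges are weighted is one plausible way to move between generic and modal-dominated ranking.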
WOS研究方向 (WOS Research Area) | Computer Science |
语种 (Language) | English |
WOS记录号 (WOS Accession Number) | WOS:000721582900007 |
出版者 (Publisher) | ASSOC COMPUTING MACHINERY |
源URL (Source URL) | http://ir.ia.ac.cn/handle/173211/46440 |
专题 (Collection) | National Laboratory of Pattern Recognition_Natural Language Processing |
通讯作者 (Corresponding Author) | Zhu, Junnan |
作者单位 (Author Affiliations) |
1. Univ Chinese Acad Sci, Sch Artificial Intelligence, Inst Automat, Natl Lab Pattern Recognit, CAS, Beijing Acad Artifi, Beijing, Peoples R China
2. Univ Chinese Acad Sci, Sch Artificial Intelligence, Inst Automat, Natl Lab Pattern Recognit, CAS, Beijing Fanyu Techn, Beijing, Peoples R China
3. Univ Chinese Acad Sci, Sch Artificial Intelligence, Inst Automat, Natl Lab Pattern Recognit, CAS, Beijing, Peoples R China
4. Intelligence Bldg, 95, Zhongguancun East Rd, Beijing 100190, Peoples R China |
推荐引用方式 (Recommended Citation, GB/T 7714) | Zhu, Junnan, Xiang, Lu, Zhou, Yu, et al. Graph-based Multimodal Ranking Models for Multimodal Summarization[J]. ACM TRANSACTIONS ON ASIAN AND LOW-RESOURCE LANGUAGE INFORMATION PROCESSING, 2021, 20(4): 21. |
APA | Zhu, Junnan, Xiang, Lu, Zhou, Yu, Zhang, Jiajun, & Zong, Chengqing. (2021). Graph-based Multimodal Ranking Models for Multimodal Summarization. ACM TRANSACTIONS ON ASIAN AND LOW-RESOURCE LANGUAGE INFORMATION PROCESSING, 20(4), 21. |
MLA | Zhu, Junnan, et al. "Graph-based Multimodal Ranking Models for Multimodal Summarization". ACM TRANSACTIONS ON ASIAN AND LOW-RESOURCE LANGUAGE INFORMATION PROCESSING 20.4 (2021): 21. |
入库方式 (Deposit Method): OAI harvesting
来源 (Source): Institute of Automation