Retrieval-enhanced adversarial training with dynamic memory-augmented attention for image paragraph captioning
Document Type | Journal Article
Authors | Xu, Chunpu (1); Yang, Min (1); Ao, Xiang (2); Shen, Ying (3); Xu, Ruifeng (4); Tian, Jinwen (5)
Journal | KNOWLEDGE-BASED SYSTEMS
Publication Date | 2021-02-28
Volume | 214
Pages | 10
Keywords | Image paragraph captioning; Key-value memory network; Adversarial training
ISSN | 0950-7051
DOI | 10.1016/j.knosys.2020.106730
Abstract | Existing image paragraph captioning methods generate long paragraph captions solely from input images, relying on insufficient information. In this paper, we propose retrieval-enhanced adversarial training with dynamic memory-augmented attention for image paragraph captioning (RAMP), which makes full use of the R-best retrieved candidate captions to enhance image paragraph captioning via adversarial training. Concretely, RAMP treats the retrieved captions as reference captions to augment the discriminator during adversarial training, encouraging the image captioning model (generator) to incorporate informative content from the retrieved captions into the generated caption. In addition, a retrieval-enhanced dynamic memory-augmented attention network is devised to keep track of the coverage information and attention history along with the update chain of the decoder state, thereby avoiding repetitive or incomplete image descriptions. Finally, a copying mechanism is applied to select words from the retrieved candidate captions and place them in the proper positions of the target caption, improving the fluency and informativeness of the generated caption. Extensive experiments on a benchmark dataset (i.e., Stanford) demonstrate that the proposed RAMP model significantly outperforms state-of-the-art methods across multiple evaluation metrics. For reproducibility, we release the code and data at https://github.com/anonymous-caption/RAMP.
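The abstract describes a copying mechanism that selects words from the retrieved candidate captions and places them into the generated paragraph. The snippet below is a minimal sketch of a pointer-generator-style copy step of that kind, written in PyTorch; the class and tensor names (CopyOverRetrieved, dec_state, retrieved_enc, retrieved_ids) are hypothetical and do not reproduce the paper's exact formulation.

```python
# Minimal sketch (not the authors' code) of a pointer-generator-style copying
# mechanism over retrieved caption tokens; shapes and names are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CopyOverRetrieved(nn.Module):
    def __init__(self, hidden_size: int, vocab_size: int):
        super().__init__()
        self.vocab_proj = nn.Linear(hidden_size, vocab_size)  # generation head
        self.gate = nn.Linear(2 * hidden_size, 1)              # soft generate/copy switch

    def forward(self, dec_state, retrieved_enc, retrieved_ids):
        # dec_state:     (B, H)    decoder hidden state at the current step
        # retrieved_enc: (B, T, H) encoded tokens of the retrieved candidate captions
        # retrieved_ids: (B, T)    vocabulary ids of those tokens
        # Attention of the decoder state over retrieved caption tokens.
        attn = F.softmax(
            torch.bmm(retrieved_enc, dec_state.unsqueeze(2)).squeeze(2), dim=-1)   # (B, T)
        # Context vector summarizing the retrieved captions.
        context = torch.bmm(attn.unsqueeze(1), retrieved_enc).squeeze(1)           # (B, H)
        # Distribution over the vocabulary from the generation head.
        p_vocab = F.softmax(self.vocab_proj(dec_state), dim=-1)                    # (B, V)
        # Probability of generating (vs. copying) the next word.
        p_gen = torch.sigmoid(self.gate(torch.cat([dec_state, context], dim=-1)))  # (B, 1)
        # Scatter the copy attention back onto vocabulary positions.
        p_copy = torch.zeros_like(p_vocab).scatter_add_(1, retrieved_ids, attn)    # (B, V)
        # Final word distribution mixes generation and copying.
        return p_gen * p_vocab + (1.0 - p_gen) * p_copy
```

In a full system this distribution would be computed at every decoding step and trained jointly with the adversarial objective in which the discriminator also sees the retrieved reference captions, as described in the abstract.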
Funding | National Natural Science Foundation of China [61906185]; Natural Science Foundation of Guangdong Province of China [2019A1515011705]; Shenzhen Science and Technology Innovation Program, China [KQTD20190929172835662]; Youth Innovation Promotion Association of CAS, China; Shenzhen Basic Research Foundation, China [JCYJ20200109113441941]
WOS Research Area | Computer Science
Language | English
WOS Accession Number | WOS:000618605200010
Publisher | ELSEVIER
Source URL | http://119.78.100.204/handle/2XEOYT63/16170
Collection | Institute of Computing Technology, Chinese Academy of Sciences: Journal Papers (English)
Corresponding Author | Yang, Min
Affiliations | 1. Chinese Acad Sci, Shenzhen Inst Adv Technol, Shenzhen Key Lab High Performance Data Min, Shenzhen, Guangdong, Peoples R China; 2. Chinese Acad Sci, Inst Comp Technol, Beijing, Peoples R China; 3. Sun Yat Sen Univ, Sch Intelligent Engn, Guangzhou, Guangdong, Peoples R China; 4. Harbin Inst Technol, Shenzhen, Peoples R China; 5. Huazhong Univ Sci & Technol, Wuhan, Hubei, Peoples R China
Recommended Citation (GB/T 7714) | Xu, Chunpu, Yang, Min, Ao, Xiang, et al. Retrieval-enhanced adversarial training with dynamic memory-augmented attention for image paragraph captioning[J]. KNOWLEDGE-BASED SYSTEMS, 2021, 214: 10.
APA | Xu, Chunpu, Yang, Min, Ao, Xiang, Shen, Ying, Xu, Ruifeng, & Tian, Jinwen. (2021). Retrieval-enhanced adversarial training with dynamic memory-augmented attention for image paragraph captioning. KNOWLEDGE-BASED SYSTEMS, 214, 10.
MLA | Xu, Chunpu, et al. "Retrieval-enhanced adversarial training with dynamic memory-augmented attention for image paragraph captioning". KNOWLEDGE-BASED SYSTEMS 214 (2021): 10.
Ingest method: OAI harvesting
Source: Institute of Computing Technology