Chinese Academy of Sciences Institutional Repositories Grid
CAM-RNN: Co-Attention Model Based RNN for Video Captioning

Document Type: Journal Article

Authors: Zhao, Bin (1,2); Li, Xuelong (1,2); Lu, Xiaoqiang (3)
Journal: IEEE TRANSACTIONS ON IMAGE PROCESSING
Publication Date: 2019-11
Volume: 28, Issue: 11
Keywords: Attention model; video captioning; recurrent neural network
ISSN: 1057-7149; 1941-0042
DOI: 10.1109/TIP.2019.2916757
Rights Ranking: 3
English Abstract:

Video captioning is a technique that bridges vision and language, for which both visual and textual information are important. Typical approaches are based on the recurrent neural network (RNN), where the caption is generated word by word and the current word is predicted from the visual content and the previously generated words. However, when predicting the current word, much of the visual content is uncorrelated with it, and some of the previously generated words provide little information, which may interfere with generating a correct caption. Motivated by this observation, we attempt to exploit the visual and text features that are most correlated with the caption. In this paper, a co-attention model based recurrent neural network (CAM-RNN) is proposed, where the CAM encodes the visual and text features and the RNN works as the decoder to generate the video caption. Specifically, the CAM is composed of a visual attention module, a text attention module, and a balancing gate. During the generation procedure, the visual attention module adaptively attends to the salient regions in each frame and to the frames most correlated with the caption, while the text attention module automatically focuses on the most relevant previously generated words or phrases. Between the two attention modules, a balancing gate regulates the influence of the visual and text features when generating the caption. Extensive experiments on four popular datasets, MSVD, Charades, MSR-VTT, and MPII-MD, demonstrate the effectiveness of the proposed approach.
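To make one decoding step concrete, below is a minimal PyTorch sketch of a co-attention block with the three components named in the abstract: a visual attention module, a text attention module, and a balancing gate that mixes the two attended contexts before they are passed to the RNN decoder. This is an illustrative reconstruction under assumed layer sizes and an assumed additive-attention form, not the authors' released code; the class name CoAttention and all dimensions are hypothetical.

# Hypothetical sketch of a co-attention step (not the authors' implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoAttention(nn.Module):
    def __init__(self, vis_dim, txt_dim, hid_dim):
        super().__init__()
        # Visual attention: scores each frame/region feature against the decoder state.
        self.vis_score = nn.Linear(vis_dim + hid_dim, 1)
        # Text attention: scores each previously generated word embedding.
        self.txt_score = nn.Linear(txt_dim + hid_dim, 1)
        # Balancing gate: scalar in (0, 1) weighting visual vs. text context.
        self.gate = nn.Linear(vis_dim + txt_dim + hid_dim, 1)
        self.vis_proj = nn.Linear(vis_dim, hid_dim)
        self.txt_proj = nn.Linear(txt_dim, hid_dim)

    def forward(self, vis_feats, txt_feats, h):
        # vis_feats: (B, N, vis_dim) frame/region features
        # txt_feats: (B, T, txt_dim) embeddings of previously generated words
        # h:         (B, hid_dim)    current decoder (RNN) hidden state
        B, N, _ = vis_feats.shape
        T = txt_feats.shape[1]
        h_v = h.unsqueeze(1).expand(B, N, -1)
        h_t = h.unsqueeze(1).expand(B, T, -1)
        a_v = F.softmax(self.vis_score(torch.cat([vis_feats, h_v], -1)).squeeze(-1), dim=1)
        a_t = F.softmax(self.txt_score(torch.cat([txt_feats, h_t], -1)).squeeze(-1), dim=1)
        v_ctx = (a_v.unsqueeze(-1) * vis_feats).sum(1)   # attended visual context
        t_ctx = (a_t.unsqueeze(-1) * txt_feats).sum(1)   # attended text context
        beta = torch.sigmoid(self.gate(torch.cat([v_ctx, t_ctx, h], -1)))
        # Balancing gate mixes the two contexts before feeding the RNN decoder.
        return beta * self.vis_proj(v_ctx) + (1 - beta) * self.txt_proj(t_ctx)

# Example with made-up sizes: 40 frame features, 5 previously generated words.
cam = CoAttention(vis_dim=2048, txt_dim=300, hid_dim=512)
ctx = cam(torch.randn(2, 40, 2048), torch.randn(2, 5, 300), torch.randn(2, 512))

At each decoding step, the fused context would typically be concatenated with the previous word embedding and fed to the RNN cell that predicts the next word.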

Language: English
WOS Record No.: WOS:000484209100003
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Source URL: [http://ir.opt.ac.cn/handle/181661/31833]
Collection: Xi'an Institute of Optics and Precision Mechanics_Center for Optical Imagery Analysis and Learning (OPTIMAL)
Corresponding Author: Lu, Xiaoqiang
Author Affiliations: 1.Northwestern Polytech Univ, Sch Comp Sci, Xian 710072, Shaanxi, Peoples R China
2.Northwestern Polytech Univ, Ctr OPT IMagery Anal & Learning OPTIMAL, Xian 710072, Shaanxi, Peoples R China
3.Chinese Acad Sci, Key Lab Spectral Imaging Technol CAS, Xian Inst Opt & Precis Mech, Xian 710119, Shaanxi, Peoples R China
Recommended Citation
GB/T 7714
Zhao, Bin, Li, Xuelong, Lu, Xiaoqiang. CAM-RNN: Co-Attention Model Based RNN for Video Captioning[J]. IEEE TRANSACTIONS ON IMAGE PROCESSING, 2019, 28(11).
APA Zhao, Bin, Li, Xuelong, & Lu, Xiaoqiang. (2019). CAM-RNN: Co-Attention Model Based RNN for Video Captioning. IEEE TRANSACTIONS ON IMAGE PROCESSING, 28(11).
MLA Zhao, Bin, et al. "CAM-RNN: Co-Attention Model Based RNN for Video Captioning". IEEE TRANSACTIONS ON IMAGE PROCESSING 28.11 (2019).

Ingestion Method: OAI Harvesting

Source: Xi'an Institute of Optics and Precision Mechanics

