Chinese Academy of Sciences Institutional Repositories Grid
Syntax-Guided Hierarchical Attention Network for Video Captioning

Document type: Journal article

Authors: Deng, Jincan (3,4); Li, Liang (3,4); Zhang, Beichen (1,2); Wang, Shuhui (3,4); Zha, Zhengjun (5); Huang, Qingming (1,2,3,4)
Journal: IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY
Publication date: 2022-02-01
Volume: 32; Issue: 2; Pages: 880-892
Keywords: Syntactics; Feature extraction; Visualization; Generators; Semantics; Two-dimensional displays; Three-dimensional displays; Video captioning; syntax attention; content attention; global sentence-context
ISSN: 1051-8215
DOI: 10.1109/TCSVT.2021.3063423
Abstract: Video captioning is a challenging task that aims to generate a linguistic description based on video content. Most methods incorporate only visual features (2D/3D) as input for generating both visual and non-visual words in the caption. However, generating non-visual words usually depends more on sentence context than on visual features. Wrong non-visual words can reduce sentence fluency and even change the meaning of the sentence. In this paper, we propose a syntax-guided hierarchical attention network (SHAN), which leverages semantic and syntax cues to integrate visual and sentence-context features for captioning. First, a globally-dependent context encoder is designed to extract the global sentence-context feature that facilitates generating non-visual words. Then, we introduce hierarchical content attention and syntax attention to adaptively integrate features in terms of temporality and feature characteristics, respectively. Content attention helps focus on time intervals related to the semantics of the current word, while cross-modal syntax attention uses syntax information to model the importance of different features for generating the target word. Moreover, such hierarchical attention can enhance the model's interpretability for captioning. Experiments on the MSVD and MSR-VTT datasets show that our method achieves performance comparable with current methods.
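The two attention levels described in the abstract (temporal content attention within each modality, then cross-modal attention over 2D, 3D, and sentence-context features) can be illustrated with a minimal NumPy sketch. All function names, the dot-product scoring, and the toy dimensions below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def content_attention(query, feats):
    """Lower level: weight per-frame features of one modality by
    relevance to the current decoding state (temporal attention)."""
    scores = feats @ query              # (T,) one score per time step
    weights = softmax(scores)
    return weights @ feats              # (d,) attended feature

def syntax_attention(query, modal_feats):
    """Upper level: weight whole modalities (2D, 3D, sentence-context)
    for generating the target word (cross-modal attention)."""
    scores = np.array([f @ query for f in modal_feats])
    weights = softmax(scores)
    fused = sum(w * f for w, f in zip(weights, modal_feats))
    return fused, weights

# Toy example: feature dim d=4, T=3 frames per visual modality.
rng = np.random.default_rng(0)
query = rng.normal(size=4)              # stand-in for the decoder state
feats_2d = rng.normal(size=(3, 4))      # per-frame 2D appearance features
feats_3d = rng.normal(size=(3, 4))      # per-clip 3D motion features
ctx = rng.normal(size=4)                # global sentence-context feature

v2d = content_attention(query, feats_2d)
v3d = content_attention(query, feats_3d)
fused, modal_weights = syntax_attention(query, [v2d, v3d, ctx])
```

The hierarchy is what makes the attention interpretable: `modal_weights` exposes how much each modality contributed to a given word, so a non-visual word would be expected to put more weight on the sentence-context feature.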
Funding: National Key Research and Development Program of China [2017YFB1300201]; National Natural Science Foundation of China [61771457, 61732007, 61672497, U19B2038, 61620106009, U1636214, 61931008, 61772494, 62022083]
WOS research area: Engineering
Language: English
WOS accession number: WOS:000752017700036
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Source URL: [http://119.78.100.204/handle/2XEOYT63/19004]
Collection: Institute of Computing Technology, Chinese Academy of Sciences (English-language journal papers)
Corresponding author: Li, Liang
Affiliations:
1.Univ Chinese Acad Sci, Sch Comp & Control Engn, Beijing 101408, Peoples R China
2.Univ Chinese Acad Sci, Key Lab Big Data Min & Knowledge Management, Beijing 101408, Peoples R China
3.Chinese Acad Sci, Key Lab Intelligent Informat Proc, CAS, Beijing 100190, Peoples R China
4.Chinese Acad Sci, Inst Comp Technol, CAS, Beijing 100190, Peoples R China
5.Univ Sci & Technol China, Sch Informat Sci & Technol, Hefei 230027, Peoples R China
Recommended citation formats:
GB/T 7714
Deng, Jincan, Li, Liang, Zhang, Beichen, et al. Syntax-Guided Hierarchical Attention Network for Video Captioning[J]. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2022, 32(2): 880-892.
APA Deng, Jincan, Li, Liang, Zhang, Beichen, Wang, Shuhui, Zha, Zhengjun, & Huang, Qingming. (2022). Syntax-Guided Hierarchical Attention Network for Video Captioning. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 32(2), 880-892.
MLA Deng, Jincan, et al. "Syntax-Guided Hierarchical Attention Network for Video Captioning." IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY 32.2 (2022): 880-892.

Deposit method: OAI harvesting

Source: Institute of Computing Technology


Unless otherwise specified, all content in this system is protected by copyright, and all rights are reserved.