Chinese Academy of Sciences Institutional Repositories Grid
Multi-modal spatial relational attention networks for visual question answering

Document type: Journal article

Authors: Yao, Haibo [1]; Wang, Lipeng [1]; Cai, Chengtao [1]; Sun, Yuxin [1]; Zhang, Zhi [1]; Luo, Yongkang [2]
Journal: IMAGE AND VISION COMPUTING
Publication date: 2023-12-01
Volume: 140; Pages: 13
ISSN: 0262-8856
Keywords: Visual question answering; Spatial relation; Attention mechanism; Pre-training strategy
DOI: 10.1016/j.imavis.2023.104840
Corresponding author: Wang, Lipeng (wanglipeng@hrbeu.edu.cn)
Abstract: Visual Question Answering (VQA) is a task that requires a VQA model to fully understand both the visual information of an image and the linguistic information of a question, and then combine the two to produce an answer. Recently, many VQA approaches have focused on modeling intra- and inter-modal interactions between vision and language using deep modular co-attention networks, achieving good performance. Despite their benefits, these approaches have limitations. First, the question representation is obtained through GloVe word embeddings and a recurrent neural network, which may be insufficient to capture the intricate semantics of the question. Second, they mostly use visual appearance features extracted by Faster R-CNN to interact with language features, ignoring important spatial relations between objects in images and thus making incomplete use of the image information. To overcome these limitations, we propose a novel Multi-modal Spatial Relation Attention Network (MSRAN) for VQA, which introduces spatial relationships between objects to exploit image information more fully and thereby improve VQA performance. To this end, we design two types of spatial relational attention modules that comprehensively explore attention schemes: (i) a Self-Attention based on Explicit Spatial Relation (SA-ESR) module that explicitly models geometric relationships between objects; and (ii) a Self-Attention based on Implicit Spatial Relation (SA-ISR) module that captures hidden dynamic relationships between objects by using spatial relationships. Moreover, the pre-trained BERT model replaces the GloVe word embeddings and recurrent neural network in MSRAN to obtain a better question representation. Extensive experiments on two large benchmark datasets, VQA 2.0 and GQA, demonstrate that our proposed model achieves state-of-the-art performance.
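The abstract does not give the exact formulation of the SA-ESR module; the sketch below illustrates the common geometry-biased self-attention pattern such modules typically follow, in which pairwise box geometry is embedded and added to the attention logits. The names (box_geometry, GeometryBiasedSelfAttention) and the 4-dimensional log-ratio encoding are illustrative assumptions, not the paper's verified implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

def box_geometry(boxes):
    # boxes: (N, 4) tensor of (x1, y1, x2, y2) region boxes from Faster R-CNN.
    # Returns (N, N, 4): log-scaled center offsets and size ratios, a common
    # explicit encoding of pairwise spatial relations (an assumption here,
    # not taken from the paper).
    cx = (boxes[:, 0] + boxes[:, 2]) / 2
    cy = (boxes[:, 1] + boxes[:, 3]) / 2
    w = (boxes[:, 2] - boxes[:, 0]).clamp(min=1e-3)
    h = (boxes[:, 3] - boxes[:, 1]).clamp(min=1e-3)
    dx = torch.log((cx[None, :] - cx[:, None]).abs().clamp(min=1e-3) / w[:, None])
    dy = torch.log((cy[None, :] - cy[:, None]).abs().clamp(min=1e-3) / h[:, None])
    dw = torch.log(w[None, :] / w[:, None])
    dh = torch.log(h[None, :] / h[:, None])
    return torch.stack([dx, dy, dw, dh], dim=-1)

class GeometryBiasedSelfAttention(nn.Module):
    # Single-head self-attention over region features whose logits are
    # biased by an embedding of pairwise box geometry; a hypothetical
    # stand-in for the SA-ESR module described in the abstract.
    def __init__(self, dim, geo_dim=4):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)
        self.geo = nn.Sequential(nn.Linear(geo_dim, dim), nn.ReLU(), nn.Linear(dim, 1))
        self.scale = dim ** -0.5

    def forward(self, feats, boxes):
        # feats: (N, dim) appearance features; boxes: (N, 4) box coordinates.
        q, k, v = self.qkv(feats).chunk(3, dim=-1)
        logits = (q @ k.t()) * self.scale                     # appearance term
        geo_bias = self.geo(box_geometry(boxes)).squeeze(-1)  # geometry term
        attn = F.softmax(logits + geo_bias, dim=-1)
        return attn @ v                                       # (N, dim)

# Example usage with 36 regions of 2048-d features, as in typical
# Faster R-CNN pipelines:
# feats = torch.randn(36, 2048)
# boxes = torch.rand(36, 4); boxes[:, 2:] += boxes[:, :2]
# out = GeometryBiasedSelfAttention(2048)(feats, boxes)  # -> (36, 2048)

On the language side, the abstract states that a pre-trained BERT model replaces GloVe embeddings plus an RNN, so the token-level BERT outputs would feed the co-attention layers as the question representation.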
Funding projects: National Natural Science Foundation of China [62173103]; National Natural Science Foundation of China [62303129]; Fundamental Research Funds for the Central Universities of China [3072022JC0402]; Fundamental Research Funds for the Central Universities of China [3072022JC0403]; Natural Science Foundation of Heilongjiang Province of China [LH2023F022]; National Key Research and Development Program of China [2019YFE0105400]; Project of Intelligent Situation Awareness System for Smart Ship [MC-201920-X01]
WOS research areas: Computer Science; Engineering; Optics
Language: English
Publisher: ELSEVIER
WOS accession number: WOS:001102256600001
Funding agencies: National Natural Science Foundation of China; Fundamental Research Funds for the Central Universities of China; Natural Science Foundation of Heilongjiang Province of China; National Key Research and Development Program of China; Project of Intelligent Situation Awareness System for Smart Ship
Source URL: http://ir.ia.ac.cn/handle/173211/55116
Collection: State Key Laboratory of Multimodal Artificial Intelligence Systems
Author affiliations:
1. Harbin Engn Univ, Coll Intelligent Syst Sci & Engn, Harbin 150001, Peoples R China
2. Chinese Acad Sci, Inst Automat, Beijing 100190, Peoples R China
Recommended citation:
GB/T 7714: Yao, Haibo, Wang, Lipeng, Cai, Chengtao, et al. Multi-modal spatial relational attention networks for visual question answering[J]. IMAGE AND VISION COMPUTING, 2023, 140: 13.
APA: Yao, Haibo, Wang, Lipeng, Cai, Chengtao, Sun, Yuxin, Zhang, Zhi, & Luo, Yongkang. (2023). Multi-modal spatial relational attention networks for visual question answering. IMAGE AND VISION COMPUTING, 140, 13.
MLA: Yao, Haibo, et al. "Multi-modal spatial relational attention networks for visual question answering". IMAGE AND VISION COMPUTING 140 (2023): 13.

Ingestion method: OAI harvesting

Source: Institute of Automation
