Densely Connected Attention Flow for Visual Question Answering
Document Type: Conference Paper
Author | Liu, Fei (1,2) |
Publication Date | 2019
Conference Date | 2019-08
Conference Venue | Macau, China
Abstract | Learning effective interactions between multimodal features is at the heart of visual question answering (VQA). A common defect of the existing VQA approaches is that they only consider a very limited amount of interactions, which may not be enough to model the latent, complex image-question relations that are necessary for accurately answering questions. Therefore, in this paper, we propose a novel DCAF (Densely Connected Attention Flow) framework for modeling dense interactions. It densely connects all pairwise layers of the network via Attention Connectors, capturing fine-grained interplay between image and question across all hierarchical levels. The proposed Attention Connector efficiently connects the multi-modal features at any two layers with symmetric co-attention, and produces interaction-aware attention features. Experimental results on three publicly available datasets show that the proposed method achieves state-of-the-art performance. |
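The abstract describes the Attention Connector only at a high level: symmetric co-attention between image and question features taken from any two layers, producing interaction-aware features. For illustration only, below is a minimal PyTorch sketch of such a symmetric co-attention block; the class name `AttentionConnector`, the bilinear affinity, the residual-style fusion, and all dimensions are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a symmetric co-attention connector (NOT the authors' code).
# It takes image-region features and question-word features from any two layers
# and returns "interaction-aware" features for both modalities.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionConnector(nn.Module):  # name and fusion choices are assumptions
    def __init__(self, dim):
        super().__init__()
        self.affinity = nn.Linear(dim, dim, bias=False)  # bilinear affinity: V W Q^T

    def forward(self, v, q):
        # v: (B, m, d) image-region features; q: (B, n, d) question-word features
        affinity = torch.bmm(self.affinity(v), q.transpose(1, 2))            # (B, m, n)
        # Symmetric co-attention: each modality attends to the other.
        v_att = torch.bmm(F.softmax(affinity, dim=1).transpose(1, 2), v)     # (B, n, d) image summary per word
        q_att = torch.bmm(F.softmax(affinity, dim=2), q)                     # (B, m, d) question summary per region
        # Interaction-aware features: fuse each modality with what it attended to.
        return v + q_att, q + v_att

# Usage: connect image features from one layer with question features from another.
v_feat = torch.randn(8, 36, 512)   # e.g., 36 region features per image
q_feat = torch.randn(8, 14, 512)   # e.g., 14 word features per question
v_new, q_new = AttentionConnector(512)(v_feat, q_feat)
```

In DCAF such connectors are placed between all pairwise layers of the network, so a full model would instantiate many of these connections rather than the single one shown here.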
Proceedings Publisher | IJCAI
Language | English
Source URL | [http://ir.ia.ac.cn/handle/173211/48557]
Collection | Institute of Automation_National Laboratory of Pattern Recognition_Image and Video Analysis Group
Corresponding Author | Liu, Jing
Author Affiliations | 1. University of Chinese Academy of Sciences; 2. National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences; 3. School of Computer and Information, Hefei University of Technology
Recommended Citation (GB/T 7714) | Liu, Fei, Liu, Jing, Fang, Zhiwei, et al. Densely Connected Attention Flow for Visual Question Answering[C]. In: IJCAI. Macau, China, 2019-08.
Deposit Method: OAI Harvesting
Source: Institute of Automation