Chinese Academy of Sciences Institutional Repositories Grid
Context-Aware Attention Network for Image-Text Retrieval

Document Type: Conference Paper

Authors: Qi Zhang 2,3; Zhen Lei 2,3; Zhaoxiang Zhang 2,3; Stan Z. Li 1
Publication Date: 2020-06-14
Conference Date: 2020-06-14
Conference Venue: Seattle, Washington, USA
Abstract (English)

As a typical cross-modal problem, image-text bidirectional retrieval relies heavily on joint embedding learning and a similarity measure for each image-text pair. It remains challenging because prior works seldom explore semantic correspondences between modalities and semantic correlations within a single modality at the same time. In this work, we propose a unified Context-Aware Attention Network (CAAN), which selectively focuses on critical local fragments (regions and words) by aggregating the global context. Specifically, it simultaneously utilizes global inter-modal alignments and intra-modal correlations to discover latent semantic relations. Considering the interactions between images and sentences in the retrieval process, intra-modal correlations are derived from the second-order attention of region-word alignments instead of intuitively comparing the distance between original features. Our method achieves fairly competitive results on two generic image-text retrieval datasets, Flickr30K and MS-COCO.
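The abstract's key idea, inter-modal attention over region-word alignments plus intra-modal attention derived from the second order of those alignments rather than raw feature distances, can be sketched as follows. This is a minimal NumPy illustration under assumptions: the function names, the temperature value, and the exact form of the second-order correlation (here, the Gram matrix of the alignment matrix) are illustrative choices, not the paper's implementation.

```python
import numpy as np

def l2norm(x, axis=-1, eps=1e-8):
    """L2-normalize features so dot products become cosine similarities."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def context_aware_attention(regions, words, temperature=9.0):
    """Sketch of inter-modal and second-order intra-modal attention.

    regions: (n_r, d) image region features; words: (n_w, d) word features.
    """
    # Region-word alignment matrix (cosine similarities between modalities).
    A = l2norm(regions) @ l2norm(words).T            # (n_r, n_w)

    # Inter-modal attention: each region attends over all words.
    inter = softmax(temperature * A, axis=1) @ words  # (n_r, d)

    # Intra-modal attention: region-region correlations taken from the
    # second order of the alignments (regions that align with similar
    # words correlate), not from distances between original features.
    R = A @ A.T                                       # (n_r, n_r)
    intra = softmax(temperature * R, axis=1) @ regions  # (n_r, d)
    return inter, intra

rng = np.random.default_rng(0)
regions = rng.standard_normal((36, 64))  # e.g. 36 detected regions
words = rng.standard_normal((12, 64))    # e.g. a 12-word sentence
inter, intra = context_aware_attention(regions, words)
```

The two attended outputs would then feed a joint similarity score for the image-text pair; the symmetric word-side attention follows by transposing `A`.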

Source URL: [http://ir.ia.ac.cn/handle/173211/39252]
Collection: Institute of Automation / National Laboratory of Pattern Recognition / Center for Biometrics and Security Research
Corresponding Author: Zhen Lei
Author Affiliations: 1. School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
2.NLPR, Institute of Automation, Chinese Academy of Sciences, Beijing, China
3.Center for AI Research and Innovation, Westlake University, Hangzhou, China
Recommended Citation
GB/T 7714
Qi Zhang, Zhen Lei, Zhaoxiang Zhang, et al. Context-Aware Attention Network for Image-Text Retrieval[C]. Seattle, Washington, USA, 2020-06-14.

Deposit Method: OAI Harvesting

Source: Institute of Automation


Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.