Chinese Academy of Sciences Institutional Repositories Grid
Efficient Token-Guided Image-Text Retrieval With Consistent Multimodal Contrastive Training

Document Type: Journal Article

Authors: Liu, Chong (4); Zhang, Yuqi (3); Wang, Hongsong (2); Chen, Weihua (3); Wang, Fan (3); Huang, Yan (1); Shen, Yi-Dong (4); Wang, Liang (1)
Journal: IEEE TRANSACTIONS ON IMAGE PROCESSING
Publication Date: 2023
Volume: 32, Pages: 3622-3633
ISSN: 1057-7149
Keywords: Image-text retrieval; multimodal transformer; multimodal contrastive training
DOI: 10.1109/TIP.2023.3286710
Corresponding Authors: Wang, Hongsong (hongsongwang@seu.edu.cn); Chen, Weihua (kugang.cwh@alibaba-inc.com)
Abstract: Image-text retrieval is a central problem for understanding the semantic relationship between vision and language, and serves as the basis for various visual and language tasks. Most previous works either simply learn coarse-grained representations of the overall image and text, or elaborately establish the correspondence between image regions or pixels and text words. However, the close relations between coarse- and fine-grained representations for each modality are important for image-text retrieval but almost neglected. As a result, such previous works inevitably suffer from low retrieval accuracy or heavy computational cost. In this work, we address image-text retrieval from a novel perspective by combining coarse- and fine-grained representation learning into a unified framework. This framework is consistent with human cognition, as humans simultaneously attend to the entire sample and to regional elements to understand the semantic content. To this end, a Token-Guided Dual Transformer (TGDT) architecture, which consists of two homogeneous branches for the image and text modalities, respectively, is proposed for image-text retrieval. The TGDT incorporates both coarse- and fine-grained retrieval into a unified framework and beneficially leverages the advantages of both retrieval approaches. A novel training objective called Consistent Multimodal Contrastive (CMC) loss is proposed accordingly to ensure the intra- and inter-modal semantic consistencies between images and texts in the common embedding space. Equipped with a two-stage inference method based on the mixed global and local cross-modal similarity, the proposed method achieves state-of-the-art retrieval performance with extremely low inference time compared with representative recent approaches. Code is publicly available: github.com/LCFractal/TGDT.
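The abstract describes a two-stage inference procedure that mixes global (coarse) and local (token-level) cross-modal similarities: all candidates are first ranked with cheap global embeddings, and only a shortlist is re-scored with fine-grained token matching. The Python/PyTorch sketch below illustrates that general idea only; the function name, the mean-of-max token similarity, and the equal 0.5/0.5 mixing weight are illustrative assumptions and do not reproduce the TGDT implementation (see github.com/LCFractal/TGDT for the authors' code).

    import torch
    import torch.nn.functional as F

    def two_stage_retrieval(img_global, img_tokens, txt_global, txt_tokens, k=20):
        # Stage 1: coarse ranking of all N texts by global cosine similarity.
        # img_global: (D,), txt_global: (N, D)
        global_sim = F.cosine_similarity(img_global.unsqueeze(0), txt_global, dim=-1)  # (N,)
        topk_sim, topk_idx = global_sim.topk(min(k, global_sim.numel()))

        # Stage 2: fine-grained re-ranking of the shortlisted candidates with a
        # token-level score (mean over image regions of the best-matching text
        # word) -- an illustrative stand-in, not the paper's exact formulation.
        img_tok = F.normalize(img_tokens, dim=-1)               # (R, D) region features
        refined = []
        for i in topk_idx:
            txt_tok = F.normalize(txt_tokens[int(i)], dim=-1)   # (W_i, D) word features
            refined.append((img_tok @ txt_tok.t()).max(dim=1).values.mean())
        refined = torch.stack(refined)

        # Mix global and local scores; equal weighting is an assumption here.
        mixed = 0.5 * topk_sim + 0.5 * refined
        return topk_idx[mixed.argsort(descending=True)]

In this scheme, with, say, 1,000 candidate texts only the top-k (e.g., 20) undergo the more expensive token-level matching, which is what keeps inference time low while retaining fine-grained accuracy on the shortlist.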
Funding Projects: Southeast University Start-Up Grant for New Faculty [RF1028623063]; National Key Research and Development Program of China [2022ZD0117900]; National Natural Science Foundation of China [62236010]; National Natural Science Foundation of China [62276261]
WOS Research Areas: Computer Science; Engineering
Language: English
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
WOS Accession Number: WOS:001024111100002
Funding Organizations: Southeast University Start-Up Grant for New Faculty; National Key Research and Development Program of China; National Natural Science Foundation of China
Source URL: http://ir.ia.ac.cn/handle/173211/53753
Collection: State Key Laboratory of Multimodal Artificial Intelligence Systems
Author Affiliations:
1. Chinese Acad Sci CASIA, Inst Automat, Ctr Res Intelligent Percept & Comp CRIPAC, Natl Lab Pattern Recognit NLPR, Beijing 100190, Peoples R China
2. Southeast Univ, Dept Comp Sci & Engn, Nanjing 210096, Peoples R China
3. Alibaba Grp, Beijing 100102, Peoples R China
4. Chinese Acad Sci, Inst Software, State Key Lab Comp Sci, Beijing 100190, Peoples R China
Recommended Citation Formats
GB/T 7714: Liu, Chong, Zhang, Yuqi, Wang, Hongsong, et al. Efficient Token-Guided Image-Text Retrieval With Consistent Multimodal Contrastive Training[J]. IEEE TRANSACTIONS ON IMAGE PROCESSING, 2023, 32: 3622-3633.
APA: Liu, Chong, Zhang, Yuqi, Wang, Hongsong, Chen, Weihua, Wang, Fan, ... & Wang, Liang. (2023). Efficient Token-Guided Image-Text Retrieval With Consistent Multimodal Contrastive Training. IEEE TRANSACTIONS ON IMAGE PROCESSING, 32, 3622-3633.
MLA: Liu, Chong, et al. "Efficient Token-Guided Image-Text Retrieval With Consistent Multimodal Contrastive Training." IEEE TRANSACTIONS ON IMAGE PROCESSING 32 (2023): 3622-3633.

Deposit Method: OAI Harvesting

Source: Institute of Automation

