Chinese Academy of Sciences Institutional Repositories Grid
Cross-Modal Retrieval via Deep and Bidirectional Representation Learning

Document Type: Journal Article

Authors: He, Yonghao (1); Xiang, Shiming (1); Kang, Cuicui (2); Wang, Jian (1); Pan, Chunhong (1)
Journal: IEEE TRANSACTIONS ON MULTIMEDIA
Publication Date: 2016-07-01
Volume: 18; Issue: 7; Pages: 1363-1377
Keywords: Bidirectional Modeling; Convolutional Neural Network; Cross-modal Retrieval; Representation Learning; Word Embedding
DOI: 10.1109/TMM.2016.2558463
Document Subtype: Article
Abstract: Cross-modal retrieval emphasizes understanding inter-modality semantic correlations, which is often achieved by designing a similarity function. A central question for such a function is how to make similarity computable across modalities. In this paper, a deep and bidirectional representation learning model is proposed to address the problem of image-text cross-modal retrieval. Owing to the solid progress of deep learning in computer vision and natural language processing, deep neural networks can reliably extract semantic representations from both raw image and raw text data. In the proposed model, two convolution-based networks therefore perform representation learning for images and texts. After passing through these networks, images and texts are mapped to a common space, in which cross-modal similarity is measured by cosine distance. A bidirectional network architecture is then designed to capture a defining property of cross-modal retrieval: bidirectional search. This architecture is trained on matched and unmatched image-text pairs simultaneously, and on this basis a learning framework with a maximum likelihood criterion is developed. The network parameters are optimized via backpropagation and stochastic gradient descent. Extensive experiments evaluate the proposed method on three publicly released datasets: IAPRTC-12, Flickr30k, and Flickr8k. The overall results show that the proposed architecture is effective and that the learned representations carry good semantics, achieving superior cross-modal retrieval performance.
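To make the described pipeline concrete, below is a minimal PyTorch sketch of the general idea in the abstract: two convolution-based branches embed images and texts into a common space, cosine similarity scores image-text pairs, and a likelihood-style objective contrasts matched pairs against unmatched ones in both search directions. All layer shapes and names (ImageBranch, TextBranch, EMBED_DIM) and the softmax cross-entropy surrogate for the maximum likelihood criterion are illustrative assumptions, not the authors' exact architecture.

```python
# A minimal sketch of the bidirectional image-text model, under the
# assumptions stated above; not the paper's exact implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

EMBED_DIM = 256  # assumed dimensionality of the common space

class ImageBranch(nn.Module):
    """Convolutional network mapping raw images to the common space."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, EMBED_DIM)

    def forward(self, x):  # x: (B, 3, H, W)
        return self.fc(self.conv(x).flatten(1))

class TextBranch(nn.Module):
    """Convolutional network over word embeddings mapping texts to the same space."""
    def __init__(self, vocab_size=10000, word_dim=100):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, word_dim)
        self.conv = nn.Conv1d(word_dim, 128, kernel_size=3, padding=1)
        self.fc = nn.Linear(128, EMBED_DIM)

    def forward(self, tokens):  # tokens: (B, L) word indices
        e = self.embed(tokens).transpose(1, 2)       # (B, word_dim, L)
        h = F.relu(self.conv(e)).max(dim=2).values   # max over word positions
        return self.fc(h)

def bidirectional_loss(img_vec, txt_vec):
    """Contrast matched (diagonal) against unmatched pairs in both
    search directions: image->text and text->image."""
    img_vec = F.normalize(img_vec, dim=1)
    txt_vec = F.normalize(txt_vec, dim=1)
    sim = img_vec @ txt_vec.t()              # cosine similarities, (B, B)
    targets = torch.arange(sim.size(0))      # matched pairs lie on the diagonal
    # softmax cross-entropy as a stand-in for the maximum likelihood criterion
    return F.cross_entropy(sim, targets) + F.cross_entropy(sim.t(), targets)

# Toy training step on random data, optimized with SGD as in the paper.
img_net, txt_net = ImageBranch(), TextBranch()
opt = torch.optim.SGD(list(img_net.parameters()) + list(txt_net.parameters()), lr=0.01)
images = torch.randn(8, 3, 64, 64)
texts = torch.randint(0, 10000, (8, 20))
loss = bidirectional_loss(img_net(images), txt_net(texts))
opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```

Treating the other items in a batch as unmatched pairs, as done here, is one common way to realize the simultaneous matched/unmatched training the abstract describes.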
WOS Keywords: MODELS
WOS Research Areas: Computer Science; Telecommunications
Language: English
WOS Accession Number: WOS:000379752600012
Funding: National Basic Research Program of China (2012CB316304); Strategic Priority Research Program of the CAS (XDB02060009); National Natural Science Foundation of China (61272331, 91338202); Beijing Natural Science Foundation (4162064)
Source URL: [http://ir.ia.ac.cn/handle/173211/11656]
Collection: Institute of Automation, National Laboratory of Pattern Recognition, Remote Sensing Image Processing Team
Corresponding Author: Xiang, Shiming
Affiliations:
1. Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
2. Chinese Acad Sci, Inst Informat Engn, Beijing 100093, Peoples R China
Recommended Citation:
GB/T 7714
He, Yonghao, Xiang, Shiming, Kang, Cuicui, et al. Cross-Modal Retrieval via Deep and Bidirectional Representation Learning[J]. IEEE TRANSACTIONS ON MULTIMEDIA, 2016, 18(7): 1363-1377.
APA He, Yonghao, Xiang, Shiming, Kang, Cuicui, Wang, Jian, & Pan, Chunhong. (2016). Cross-Modal Retrieval via Deep and Bidirectional Representation Learning. IEEE TRANSACTIONS ON MULTIMEDIA, 18(7), 1363-1377.
MLA He, Yonghao, et al. "Cross-Modal Retrieval via Deep and Bidirectional Representation Learning". IEEE TRANSACTIONS ON MULTIMEDIA 18.7 (2016): 1363-1377.

Deposit Method: OAI Harvesting

Source: Institute of Automation

Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.