Chinese Academy of Sciences Institutional Repositories Grid
Instance-aware Image and Sentence Matching with Selective Multimodal LSTM

Document Type: Conference Paper

Author: Huang Yan (黄岩)1,2; Wang Wei (王威)1,2; Wang Liang (王亮)1,2
Issued Date: 2017-08
Conference Date: 2017.7.20
Conference Place: USA
English Abstract
Effective image and sentence matching depends on how well their global visual-semantic similarity is measured. Based on the observation that such a global similarity arises from a complex aggregation of multiple local similarities between pairwise instances of the image (objects) and the sentence (words), we propose a selective multimodal Long Short-Term Memory network (sm-LSTM) for instance-aware image and sentence matching. The sm-LSTM includes a multimodal context-modulated attention scheme at each timestep that can selectively attend to a pair of image and sentence instances by predicting pairwise instance-aware saliency maps for the image and the sentence. For the selected pairwise instances, representations are obtained from the predicted saliency maps and then compared to measure their local similarity. By measuring multiple such local similarities over a few timesteps, the sm-LSTM sequentially aggregates them with hidden states to obtain a final matching score as the desired global similarity. Extensive experiments show that our model can match images and sentences with complex content well, and achieves state-of-the-art results on two public benchmark datasets.
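The aggregation idea in the abstract, attending to one image instance and one sentence instance per timestep, scoring their local similarity, and pooling local scores into a global matching score, can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the attention is driven by simple random projections rather than the paper's context-modulated MLP, the hidden-state update is a crude stand-in for the LSTM, and all weights (`W_i`, `W_s`) are untrained assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def sm_lstm_score(img_regions, sent_words, W_i, W_s, num_steps=3):
    """Toy sm-LSTM-style matcher.

    At each timestep, attention weights over image regions and sentence
    words (modulated here by a running hidden state) select one image
    instance and one sentence instance; their cosine similarity is a
    local score, and the local scores are averaged into a global
    matching score.
    """
    d = img_regions.shape[1]
    h = np.zeros(d)  # stand-in for the LSTM hidden state
    local_sims = []
    for _ in range(num_steps):
        # instance-aware saliency: attention over regions / words
        a_img = softmax(img_regions @ (W_i @ h + 1.0))
        a_sent = softmax(sent_words @ (W_s @ h + 1.0))
        img_inst = a_img @ img_regions    # attended image instance
        sent_inst = a_sent @ sent_words   # attended sentence instance
        local_sims.append(cosine(img_inst, sent_inst))
        h = 0.5 * h + 0.5 * (img_inst + sent_inst)  # crude aggregation
    return float(np.mean(local_sims))

# Usage with random "features" (4 image regions, 6 words, dim 8):
d = 8
regions = rng.normal(size=(4, d))
words = rng.normal(size=(6, d))
W_i = rng.normal(size=(d, d)) * 0.1
W_s = rng.normal(size=(d, d)) * 0.1
score = sm_lstm_score(regions, words, W_i, W_s)
```

Since each local similarity is a cosine, the averaged global score stays in [-1, 1]; in the actual model a learned scoring layer and ranking loss replace this plain average.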
Source URL: [http://ir.ia.ac.cn/handle/173211/14818]
Collection: Institute of Automation, Center for Research on Intelligent Perception and Computing
Affiliation: 1. Institute of Automation, Chinese Academy of Sciences
2. University of Chinese Academy of Sciences
Recommended Citation
GB/T 7714
Huang Yan, Wang Wei, Wang Liang. Instance-aware Image and Sentence Matching with Selective Multimodal LSTM[C]. In: . USA, 2017.7.20.

Deposit Method: OAI Harvesting

Source: Institute of Automation


Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.