|Author|Huang Yan (黄岩); Wang Wei (王威); Wang Liang (王亮)|
Effective image and sentence matching depends on how well their global visual-semantic similarity is measured. Based on the observation that such a global similarity arises from a complex aggregation of multiple local similarities between pairwise instances of the image (objects) and the sentence (words), we propose a selective multimodal Long Short-Term Memory network (sm-LSTM) for instance-aware image and sentence matching. The sm-LSTM includes a multimodal context-modulated attention scheme at each timestep that can selectively attend to a pair of instances from the image and sentence by predicting pairwise instance-aware saliency maps for them. For the selected pairwise instances, representations are obtained from the predicted saliency maps and then compared to measure their local similarity. By similarly measuring multiple local similarities over a few timesteps, the sm-LSTM sequentially aggregates them through its hidden states to obtain a final matching score as the desired global similarity. Extensive experiments show that our model can effectively match images and sentences with complex content, and achieves state-of-the-art results on two public benchmark datasets.
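The attend-compare-aggregate loop described above can be made concrete with a short sketch. Below is a minimal PyTorch illustration written from the abstract alone: soft attention over local image regions and sentence words stands in for the paper's saliency-map prediction, and a single LSTM cell aggregates the attended pairs into a matching score. All names and design details (SmLSTMSketch, the joint projection, the number of timesteps, the feature dimensions) are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmLSTMSketch(nn.Module):
    """Illustrative sm-LSTM-style matcher (assumed design, not the paper's code).

    At each timestep, attention over image regions and sentence words is
    modulated by the previous hidden state (the multimodal context), a
    soft-selected region/word pair is compared in a joint space, and an
    LSTM cell aggregates the sequence of local matches into one score.
    """

    def __init__(self, img_dim: int, word_dim: int, hidden_dim: int, steps: int = 3):
        super().__init__()
        self.steps = steps
        # Context-modulated attention: score each candidate given the hidden state.
        self.img_att = nn.Linear(img_dim + hidden_dim, 1)
        self.txt_att = nn.Linear(word_dim + hidden_dim, 1)
        # Project the attended pair into a joint space before aggregation.
        self.joint = nn.Linear(img_dim + word_dim, hidden_dim)
        self.cell = nn.LSTMCell(hidden_dim, hidden_dim)
        self.score = nn.Linear(hidden_dim, 1)

    def forward(self, regions: torch.Tensor, words: torch.Tensor) -> torch.Tensor:
        # regions: (B, R, img_dim) local image features; words: (B, W, word_dim).
        B = regions.size(0)
        h = regions.new_zeros(B, self.cell.hidden_size)
        c = torch.zeros_like(h)
        for _ in range(self.steps):
            # "Saliency maps": softmax attention over regions and words,
            # each conditioned on the current multimodal context h.
            ctx_r = h.unsqueeze(1).expand(-1, regions.size(1), -1)
            ctx_w = h.unsqueeze(1).expand(-1, words.size(1), -1)
            a_img = F.softmax(self.img_att(torch.cat([regions, ctx_r], dim=-1)).squeeze(-1), dim=-1)
            a_txt = F.softmax(self.txt_att(torch.cat([words, ctx_w], dim=-1)).squeeze(-1), dim=-1)
            # Soft-selected instance pair (attention-weighted sums).
            v = (a_img.unsqueeze(-1) * regions).sum(dim=1)
            w = (a_txt.unsqueeze(-1) * words).sum(dim=1)
            # Local match of the pair, folded into the running hidden state.
            x = torch.tanh(self.joint(torch.cat([v, w], dim=-1)))
            h, c = self.cell(x, (h, c))
        # The final hidden state summarizes the aggregated local similarities.
        return self.score(h).squeeze(-1)


# Usage: 49 region features per image (batch of 4) vs. 12-word sentences.
model = SmLSTMSketch(img_dim=512, word_dim=300, hidden_dim=256)
scores = model(torch.randn(4, 49, 512), torch.randn(4, 12, 300))  # shape (4,)
```

The key design point the sketch preserves is that attention is recomputed at every timestep from the evolving hidden state, so successive steps can attend to different instance pairs; how the saliency maps are actually predicted from image and sentence encodings is specific to the paper and not reproduced here.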
|Huang Yan, Wang Wei, Wang Liang. Instance-aware Image and Sentence Matching with Selective Multimodal LSTM [C]. In: . USA. 2017.7.20.|