|Author||Huang, Yan; Wu, Qi; Wang, Liang|
|Conference Place||Salt Lake City|
|Keyword||Image And Sentence Matching|
Image and sentence matching has made great progress recently, but it remains challenging due to the large visual-semantic discrepancy, which mainly arises because a pixel-level image representation usually lacks the high-level semantic information present in its matched sentence. In this work, we propose a semantic-enhanced image and sentence matching model that improves the image representation by learning semantic concepts and then organizing them in a correct semantic order. Given an image, we first use a multi-regional multi-label CNN to predict its semantic concepts, including objects, properties, and actions. Then, since different orders of semantic concepts convey different semantic meanings, we use a context-gated sentence generation scheme for semantic order learning, which simultaneously uses the global image context containing concept relations as reference and the ground-truth semantic order in the matched sentence as supervision. After obtaining the improved image representation, we learn the sentence representation with a conventional LSTM and then jointly perform image and sentence matching and sentence generation for model learning. Extensive experiments demonstrate the effectiveness of the learned semantic concepts and order, achieving state-of-the-art results on two public benchmark datasets.
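The "context-gated" combination described in the abstract can be illustrated with a minimal sketch: a learned sigmoid gate decides, per dimension, how much the final image representation draws from the predicted semantic concepts versus the global image context. All names and shapes here (`gated_fusion`, `W_g`, `b_g`, the 8-dimensional vectors) are hypothetical illustrations, not the authors' actual implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(concept_vec, context_vec, W_g, b_g):
    # The gate g is in (0, 1) elementwise, so each fused dimension is a
    # convex combination of the concept feature and the context feature.
    g = sigmoid(W_g @ np.concatenate([concept_vec, context_vec]) + b_g)
    return g * concept_vec + (1.0 - g) * context_vec

rng = np.random.default_rng(0)
d = 8  # toy feature dimension for illustration only
concept = rng.standard_normal(d)   # stands in for predicted concept features
context = rng.standard_normal(d)   # stands in for the global image context
W_g = rng.standard_normal((d, 2 * d)) * 0.1
b_g = np.zeros(d)

fused = gated_fusion(concept, context, W_g, b_g)
print(fused.shape)  # (8,)
```

Because the gate is a sigmoid, every component of the fused vector lies between the corresponding components of the two inputs; in training, `W_g` and `b_g` would be learned jointly with the matching and generation objectives.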
|Author of Source||Michael Brown|
|Huang, Yan; Wu, Qi; Wang, Liang. Learning Semantic Concepts and Order for Image and Sentence Matching[C]. In: Salt Lake City, 2018-06-18 to 2018-06-22.|
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.