Chinese Academy of Sciences Institutional Repositories Grid
ViP-CNN: Visual Phrase Guided Convolutional Neural Network

Document Type: Conference Paper

Authors: Yikang Li; Wanli Ouyang; Xiaogang Wang; Xiaoou Tang
Publication Date: 2017
Conference Venue: United States
Abstract: As the intermediate-level task connecting image captioning and object detection, visual relationship detection has started to catch researchers' attention because of its descriptive power and clear structure. It detects the objects and captures their pair-wise interactions with a subject-predicate-object triplet, e.g. ⟨person-ride-horse⟩. In this paper, each visual relationship is considered as a phrase with three components. We formulate visual relationship detection as three inter-connected recognition problems and propose a Visual Phrase guided Convolutional Neural Network (ViP-CNN) to address them simultaneously. In ViP-CNN, we present a Phrase-guided Message Passing Structure (PMPS) to establish the connection among relationship components and help the model consider the three problems jointly. A corresponding non-maximum suppression method and model training strategy are also proposed. Experimental results show that our ViP-CNN outperforms the state-of-the-art method in both speed and accuracy. We further pretrain ViP-CNN on our cleansed Visual Genome Relationship dataset, which is found to perform better than pretraining on ImageNet for this task.
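The abstract describes each visual relationship as a subject-predicate-object triplet grounded in two bounding boxes. As an illustrative sketch only (the class and field names below are hypothetical, not from the paper's code), such a triplet and the union box covering the whole phrase could be represented as:

```python
from dataclasses import dataclass
from typing import Tuple

# A box as (x1, y1, x2, y2) corner coordinates.
Box = Tuple[float, float, float, float]

@dataclass
class VisualRelationship:
    """A subject-predicate-object phrase, e.g. <person-ride-horse>."""
    subject: str
    predicate: str
    object: str
    subject_box: Box
    object_box: Box

    def phrase_box(self) -> Box:
        """Union of the subject and object boxes, enclosing the whole phrase."""
        sx1, sy1, sx2, sy2 = self.subject_box
        ox1, oy1, ox2, oy2 = self.object_box
        return (min(sx1, ox1), min(sy1, oy1), max(sx2, ox2), max(sy2, oy2))

r = VisualRelationship("person", "ride", "horse",
                       subject_box=(10, 20, 60, 120),
                       object_box=(0, 50, 100, 150))
print(r.phrase_box())  # (0, 20, 100, 150)
```

This union-box idea matches the paper's notion of a phrase region spanning both participants; the actual ViP-CNN feature extraction and message passing are not shown here.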
Language: English
Source URL: [http://ir.siat.ac.cn:8080/handle/172644/11767]
Collection: Shenzhen Institutes of Advanced Technology_Institute of Integration
Author Affiliation: 2017
Recommended Citation Format
GB/T 7714
Yikang Li, Wanli Ouyang, Xiaogang Wang, et al. ViP-CNN: Visual Phrase Guided Convolutional Neural Network[C]. In: . United States.

Deposit Method: OAI Harvesting

Source: Shenzhen Institutes of Advanced Technology


Unless otherwise stated, all content in this system is protected by copyright, and all rights are reserved.