Chinese Academy of Sciences Institutional Repositories Grid
Joint Visual Context for Pedestrian Captioning

Document type: Conference paper

Authors: Quan Liu 1,2,3; Sijiong Zhang 1,2,3
Publication date: 2018
Conference: 9th International Conference, ICIMCS 2017
Conference date: 2017-08-23
Conference location: Qingdao, China
Keywords: Image captioning; Pedestrian description
Abstract
Image captioning is a fundamental task connecting computer vision and natural language processing. Recent research usually concentrates on generic image or video captioning across thousands of object classes. However, such methods cannot effectively handle a specific class of objects, such as pedestrians. Pedestrian captioning is critical for analysis, identification, and retrieval in massive data collections.

Therefore, in this paper, we propose a novel approach to pedestrian captioning with joint visual context. First, a deep convolutional neural network (CNN) is employed to obtain the global attributes of a pedestrian (e.g., gender, age, and actions), and a Faster R-CNN is used to detect local parts of interest and identify the local attributes of a pedestrian (e.g., clothing type, color, and belongings).
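The two-branch attribute extraction can be sketched as follows. The attribute vocabularies and predicted labels below are hypothetical stand-ins: in the actual system, a trained CNN would score the global attributes and a trained Faster R-CNN would localize parts before scoring the local ones.

```python
import numpy as np

# Hypothetical attribute vocabularies (not the paper's actual label sets).
GLOBAL_ATTRS = ["male", "female", "child", "adult", "walking", "standing"]
LOCAL_ATTRS = ["t-shirt", "coat", "dress", "red", "blue", "black", "backpack", "handbag"]

def encode_attributes(predicted_labels, vocab):
    """Encode a set of predicted attribute labels as a multi-hot vector."""
    vec = np.zeros(len(vocab), dtype=np.float32)
    for label in predicted_labels:
        vec[vocab.index(label)] = 1.0
    return vec

# Stand-in predictions for one pedestrian image: the global CNN branch
# and the local Faster R-CNN branch each yield a set of attribute labels.
global_vec = encode_attributes(["female", "adult", "walking"], GLOBAL_ATTRS)
local_vec = encode_attributes(["coat", "red", "handbag"], LOCAL_ATTRS)
```

Each branch thus produces a fixed-size vector over its own attribute vocabulary, which is what makes the later concatenation into a single fixed-length context vector possible.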

Then, we concatenate the global and local attributes into a fixed-length vector and feed it into a Long Short-Term Memory (LSTM) network to generate descriptions. Finally, a dataset of 5000 pedestrian images is collected to evaluate the performance of pedestrian captioning.
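The concatenation step, and the way the fused vector conditions an LSTM decoder, can be illustrated with a minimal NumPy sketch. All dimensions, weights, and the choice to feed the context as the first decoder input are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in attribute score vectors from the two branches (illustrative sizes).
global_vec = rng.random(6).astype(np.float32)   # e.g., gender/age/action scores
local_vec = rng.random(8).astype(np.float32)    # e.g., clothing/color/belongings scores

# Concatenate into one fixed-length joint visual context vector.
context = np.concatenate([global_vec, local_vec])  # shape (14,)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell step: gates are computed from input x and previous hidden h."""
    z = W @ x + U @ h + b                 # stacked pre-activations, (4*hidden,)
    i, f, o, g = np.split(z, 4)           # input, forget, output gates; candidate
    i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
    c = f * c + i * g                     # new cell state
    h = o * np.tanh(c)                    # new hidden state
    return h, c

hidden = 16
W = rng.standard_normal((4 * hidden, context.size)) * 0.1
U = rng.standard_normal((4 * hidden, hidden)) * 0.1
b = np.zeros(4 * hidden)

# Feed the joint context as the first decoder input; subsequent steps would
# take word embeddings, with the hidden state projected onto a vocabulary.
h, c = np.zeros(hidden), np.zeros(hidden)
h, c = lstm_step(context, h, c, W, U, b)
```

The point of the sketch is the data flow: both attribute vectors collapse into one fixed-length conditioning signal, so the decoder sees global and local cues jointly rather than as separate inputs.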

Experimental results show the superiority of the proposed approach.
Proceedings: Internet Multimedia Computing and Service (CCIS, volume 819)
Proceedings publisher: Springer
Subject area: Astronomical techniques and methods
Proceedings place of publication: Switzerland
Language: English
Source URL: [http://ir.niaot.ac.cn/handle/114a32/1534]
Collection: Conference papers
Author affiliations:
1. Nanjing Institute of Astronomical Optics & Technology
2. Key Laboratory of Astronomical Optics & Technology
3. University of Chinese Academy of Sciences
Recommended citation (GB/T 7714):
Quan Liu, Sijiong Zhang. Joint Visual Context for Pedestrian Captioning[C]. In: 9th International Conference, ICIMCS 2017. Qingdao, China. 2017-08-23.

Deposit method: OAI harvesting

Source: Nanjing Institute of Astronomical Optics & Technology


Unless otherwise stated, all content in this system is protected by copyright, and all rights are reserved.