Learning Coarse-to-fine Structured Feature Embedding for Vehicle Re-identification
Document Type: Conference Paper
Authors | Guo Haiyun 1,2 |
Publication Date | 2018 |
Conference Date | February 2-7, 2018 |
Conference Venue | New Orleans, Louisiana, USA |
Abstract | Vehicle re-identification (re-ID) is to identify the same vehicle across different cameras. It is a significant but challenging topic which has received little attention due to the complex intra-class and inter-class variation of vehicle images and the lack of large-scale vehicle re-ID datasets. Previous methods focus on pulling images from different vehicles apart but neglect the discrimination between vehicles from different vehicle models, which is actually quite important for obtaining a correct ranking order in vehicle re-ID. In this paper, we learn a structured feature embedding for vehicle re-ID with a novel coarse-to-fine ranking loss to pull images of the same vehicle as close as possible and achieve discrimination between images from different vehicles as well as vehicles from different vehicle models. In the learnt feature space, both intra-class compactness and inter-class distinction are well guaranteed, and the Euclidean distance between features directly reflects the semantic similarity of vehicle images. Furthermore, we build the largest vehicle re-ID dataset to date, “Vehicle-1M”, which involves nearly 1 million images captured in various surveillance scenarios. Experimental results on “Vehicle-1M” and “VehicleID” demonstrate the superiority of our proposed approach. (See the illustrative loss sketch after this record.) |
Source URL | http://ir.ia.ac.cn/handle/173211/20902 |
Collection | Institute of Automation_National Laboratory of Pattern Recognition_Image and Video Analysis Group
Corresponding Author | Zhao Chaoyang
Author Affiliations | 1. Univ Chinese Acad Sci, Beijing 100190, Peoples R China; 2. Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
Recommended Citation (GB/T 7714) | Guo Haiyun, Zhao Chaoyang, Liu Zhiwei, et al. Learning Coarse-to-fine Structured Feature Embedding for Vehicle Re-identification[C]. In: . New Orleans, Louisiana, USA, February 2-7, 2018.
Deposit Method: OAI Harvesting
Source: Institute of Automation
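The abstract describes the core technique: a coarse-to-fine ranking loss that pulls images of the same vehicle together while separating both different vehicles of the same model and different vehicle models in the learnt embedding space. The code below is a minimal PyTorch-style sketch of such a two-level margin ranking loss; the function name, margin values, and quadruplet sampling (anchor, same-vehicle positive, same-model negative, different-model negative) are assumptions for illustration, not the paper's exact formulation.

```python
# Illustrative sketch only: a two-level (coarse-to-fine) margin ranking loss.
# All names and margin values below are assumptions, not the paper's exact loss.
import torch
import torch.nn.functional as F

def coarse_to_fine_ranking_loss(anchor, pos_same_vehicle, neg_same_model,
                                neg_diff_model, margin_fine=0.2, margin_coarse=0.4):
    """All inputs are (batch, dim) embedding tensors from the same network."""
    d_pos = F.pairwise_distance(anchor, pos_same_vehicle)       # same vehicle ID
    d_neg_fine = F.pairwise_distance(anchor, neg_same_model)    # different vehicle, same model
    d_neg_coarse = F.pairwise_distance(anchor, neg_diff_model)  # different vehicle model
    # Fine level: images of the same vehicle should lie closer to the anchor
    # than images of a different vehicle of the same model.
    loss_fine = F.relu(d_pos - d_neg_fine + margin_fine)
    # Coarse level: same-model negatives should still lie closer to the anchor
    # than negatives from a different vehicle model.
    loss_coarse = F.relu(d_neg_fine - d_neg_coarse + margin_coarse)
    return (loss_fine + loss_coarse).mean()

# Usage example with random 128-d embeddings for a batch of 8 quadruplets.
if __name__ == "__main__":
    a, p, nm, nd = (torch.randn(8, 128) for _ in range(4))
    print(coarse_to_fine_ranking_loss(a, p, nm, nd))
```

Stacking the two hinge terms enforces an ordering of distances (same vehicle < different vehicle of the same model < different vehicle model), which is the structured, coarse-to-fine behaviour the abstract attributes to the learnt feature space.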