GLAD: Global-Local-Alignment Descriptor for Scalable Person Re-Identification
Document Type: Journal Article
Authors | Wei, Longhui1; Zhang, Shiliang1; Yao, Hantao2; Gao, Wen; Tian, Qi |
Journal | IEEE TRANSACTIONS ON MULTIMEDIA |
Publication Date | 2019-04-01 |
Volume | 21 |
Issue | 4 |
Pages | 986-999 |
Keywords | Person re-identification (Re-ID); global-local-alignment descriptor; retrieval framework |
ISSN | 1520-9210 |
DOI | 10.1109/TMM.2018.2870522 |
Corresponding Author | Zhang, Shiliang (slzhang.jdl@pku.edu.cn) |
Abstract | The huge variance of human pose and the misalignment of detected human images significantly increase the difficulty of pedestrian image matching in person Re-Identification (Re-ID). Moreover, the massive visual data produced by surveillance video cameras requires highly efficient person Re-ID systems. To solve the first problem, this work proposes a robust and discriminative pedestrian image descriptor, namely, the Global-Local-Alignment Descriptor (GLAD). For the second problem, this work treats person Re-ID as image retrieval and proposes an efficient indexing and retrieval framework. GLAD explicitly leverages the local and global cues in the human body to generate a discriminative and robust representation. It consists of part extraction and descriptor learning modules, where several part regions are first detected and then deep neural networks are designed for representation learning on both the local and global regions. A hierarchical indexing and retrieval framework is designed to perform offline relevance mining, which eliminates the huge person ID redundancy in the gallery set and accelerates the online Re-ID procedure. Extensive experimental results on widely used public benchmark datasets show that GLAD achieves competitive accuracy compared with state-of-the-art methods. On a large-scale person Re-ID dataset containing more than 520K images, our retrieval framework significantly accelerates the online Re-ID procedure while also improving Re-ID accuracy. Therefore, this work has the potential to perform better on person Re-ID tasks in real scenarios. |
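Below is a minimal, hypothetical sketch of the core GLAD idea described in the abstract: a pedestrian image is split into a global view plus rough head, upper-body, and lower-body regions, each region is encoded with a shared CNN backbone, and the sub-descriptors are concatenated into the final representation. This is not the authors' code; the ResNet-18 backbone, the fixed vertical split ratios, and the 256-d projection are illustrative assumptions, whereas the paper estimates part regions from detected body keypoints and trains a multi-branch network jointly.

```python
# Illustrative sketch of a GLAD-style global + local descriptor (assumptions noted above).
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models


class GLADSketch(nn.Module):
    def __init__(self, feat_dim=256):
        super().__init__()
        backbone = models.resnet18(weights=None)      # assumed backbone, not the paper's
        # Drop the classification head; keep the convolutional feature extractor + pooling.
        self.cnn = nn.Sequential(*list(backbone.children())[:-1])
        self.proj = nn.Linear(512, feat_dim)

    def encode(self, region):
        # Resize any region to a common input size, then produce a sub-descriptor.
        region = F.interpolate(region, size=(224, 224), mode="bilinear", align_corners=False)
        feat = self.cnn(region).flatten(1)            # (B, 512)
        return self.proj(feat)                        # (B, feat_dim)

    def forward(self, img):
        # img: (B, 3, H, W) pedestrian crop.
        h = img.shape[2]
        head = img[:, :, : h // 5, :]                 # rough head region (assumed split)
        upper = img[:, :, h // 5 : 3 * h // 5, :]     # rough upper-body region
        lower = img[:, :, 3 * h // 5 :, :]            # rough lower-body region
        # Final descriptor: concatenation of the global and local sub-descriptors.
        feats = [self.encode(r) for r in (img, head, upper, lower)]
        return torch.cat(feats, dim=1)                # (B, 4 * feat_dim)


if __name__ == "__main__":
    model = GLADSketch().eval()
    with torch.no_grad():
        desc = model(torch.randn(2, 3, 256, 128))
    print(desc.shape)  # torch.Size([2, 1024])
```

In a Re-ID setting, such descriptors would be compared with a distance such as Euclidean or cosine to rank gallery images against a query; the hierarchical indexing framework described in the abstract additionally mines relevance among gallery images offline to reduce person ID redundancy and accelerate this online ranking.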
WOS Keywords | IDENTIFICATION; PERFORMANCE |
Funding Project | NVIDIA NVAIL program |
WOS Research Areas | Computer Science; Telecommunications |
Language | English |
WOS Accession Number | WOS:000462413700014 |
Publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC |
Funding Organization | NVIDIA NVAIL program |
Source URL | http://ir.ia.ac.cn/handle/173211/23498 |
Collection | Institute of Automation, National Laboratory of Pattern Recognition, Multimedia Computing and Graphics Team |
Author Affiliations | 1. Peking Univ, Sch Elect Engn & Comp Sci, Beijing 100871, Peoples R China; 2. Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China; 3. Huawei, Noahs Ark Lab, Shenzhen 518129, Peoples R China; 4. Univ Texas San Antonio, Dept Comp Sci, San Antonio, TX 78249 USA |
Recommended Citation (GB/T 7714) | Wei, Longhui, Zhang, Shiliang, Yao, Hantao, et al. GLAD: Global-Local-Alignment Descriptor for Scalable Person Re-Identification[J]. IEEE TRANSACTIONS ON MULTIMEDIA, 2019, 21(4): 986-999. |
APA | Wei, Longhui, Zhang, Shiliang, Yao, Hantao, Gao, Wen, & Tian, Qi. (2019). GLAD: Global-Local-Alignment Descriptor for Scalable Person Re-Identification. IEEE TRANSACTIONS ON MULTIMEDIA, 21(4), 986-999. |
MLA | Wei, Longhui, et al. "GLAD: Global-Local-Alignment Descriptor for Scalable Person Re-Identification". IEEE TRANSACTIONS ON MULTIMEDIA 21.4 (2019): 986-999. |
Ingestion Method: OAI harvesting
Source: Institute of Automation