GaitGANv2: Invariant gait feature extraction using generative adversarial networks
Document type | Journal article
Authors | Yu, Shiqi1; Liao, Rijun1; An, Weizhi1; Chen, Haifeng1; Garcia, Edel B.2; Huang, Yongzhen3,4 |
Journal | PATTERN RECOGNITION |
Publication date | 2019-03-01 |
Volume | 87 | Pages | 179-189 |
Keywords | Gait recognition; Generative adversarial networks; Invariant feature |
ISSN | 0031-3203 |
DOI | 10.1016/j.patcog.2018.10.019 |
Corresponding author | Yu, Shiqi (shiqi.yu@szu.edu.cn) |
Abstract | The performance of gait recognition can be adversely affected by many sources of variation such as view angle, clothing, presence of and type of bag, posture, and occlusion, among others. To extract invariant gait features, we proposed a method called GaitGANv2 which is based on generative adversarial networks (GAN). In the proposed method, a GAN model is taken as a regressor to generate a canonical side view of a walking gait in normal clothing without carrying any bag. A unique advantage of this approach is that, unlike other methods, GaitGANv2 does not need to determine the view angle before generating invariant gait images. Indeed, only one model is needed to account for all possible sources of variation, such as carrying accessories or not and varying view angles. The most important computational challenge, however, is how to retain useful identity information when generating the invariant gait images. To this end, our approach differs from the traditional GAN in that GaitGANv2 contains two discriminators instead of one, respectively called the fake/real discriminator and the identification discriminator. While the first discriminator ensures that the generated gait images are realistic, the second one maintains the human identity information. The proposed GaitGANv2 represents an improvement over GaitGANv1 in that the former adopts a multi-loss strategy to optimize the network, increasing the inter-class distance and reducing the intra-class distance at the same time. Experimental results show that GaitGANv2 can achieve state-of-the-art performance. (C) 2018 Elsevier Ltd. All rights reserved. |
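The abstract describes a generator trained against two discriminators: a fake/real discriminator that pushes generated images toward realism, and an identification discriminator that preserves identity. A minimal numerical sketch of how such a combined generator objective could be composed is shown below. This is illustrative only, not the paper's exact losses; the function name, the binary cross-entropy form, and the weighting parameter `lam` are all assumptions.

```python
import numpy as np

def sigmoid(x):
    """Map a raw discriminator logit to a probability in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def bce(p, target):
    """Binary cross-entropy for a single probability against a 0/1 target."""
    eps = 1e-12  # guard against log(0)
    return -(target * np.log(p + eps) + (1.0 - target) * np.log(1.0 - p + eps))

def two_discriminator_generator_loss(d_realfake_logit, d_identity_logit, lam=1.0):
    """Illustrative combined generator objective in the spirit of a
    two-discriminator GAN (hypothetical, not GaitGANv2's exact formulation).

    d_realfake_logit : logit from the fake/real discriminator on the
                       generated image (higher = judged more realistic).
    d_identity_logit : logit from the identification discriminator on the
                       (generated image, claimed identity) pair
                       (higher = judged same identity).
    lam              : assumed weight balancing realism vs. identity.
    """
    realism_loss = bce(sigmoid(d_realfake_logit), 1.0)   # fool the fake/real discriminator
    identity_loss = bce(sigmoid(d_identity_logit), 1.0)  # keep identity recognizable
    return realism_loss + lam * identity_loss
```

A generator that fools both discriminators (large positive logits) incurs a near-zero loss, while one that fails both is penalized heavily, which mirrors the multi-loss idea of jointly optimizing realism and identity preservation.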
WOS keywords | RECOGNITION; PROJECTION |
Funding projects | Science Foundation of Shenzhen [JCYJ20150324141711699]; Science Foundation of Shenzhen [20170504160426188] |
WOS research areas | Computer Science; Engineering |
Language | English |
WOS accession number | WOS:000453338200015 |
Publisher | ELSEVIER SCI LTD |
Funding agency | Science Foundation of Shenzhen |
Source URL | [http://ir.ia.ac.cn/handle/173211/25661] |
Collection | Institute of Automation, Center for Research on Intelligent Perception and Computing |
Author affiliations | 1. Shenzhen Univ, Coll Comp Sci & Software Engn, Shenzhen, Peoples R China; 2. Adv Technol Applicat Ctr CENATAV, Havana, Cuba; 3. Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing, Peoples R China; 4. Watrix Technol Ltd Co Ltd, Suzhou, Peoples R China; 5. Trust Stamp, Atlanta, GA USA |
Recommended citation (GB/T 7714) | Yu, Shiqi, Liao, Rijun, An, Weizhi, et al. GaitGANv2: Invariant gait feature extraction using generative adversarial networks[J]. PATTERN RECOGNITION, 2019, 87: 179-189. |
APA | Yu, Shiqi, Liao, Rijun, An, Weizhi, Chen, Haifeng, Garcia, Edel B., ... & Poh, Norman. (2019). GaitGANv2: Invariant gait feature extraction using generative adversarial networks. PATTERN RECOGNITION, 87, 179-189. |
MLA | Yu, Shiqi, et al. "GaitGANv2: Invariant gait feature extraction using generative adversarial networks." PATTERN RECOGNITION 87 (2019): 179-189. |
Ingest method: OAI harvesting
Source: Institute of Automation