Learning Gait Representation From Massive Unlabelled Walking Videos: A Benchmark
Document Type: Journal Article
Authors | Fan, Chao5; Hou, Saihui3,4; Wang, Jilong1,2; Huang, Yongzhen3,4; Yu, Shiqi5 |
Journal | IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE |
Publication Date | 2023-12-01 |
Volume/Issue/Pages | 45(12): 14920-14937 |
ISSN | 0162-8828 |
Keywords | Gait recognition; self-supervised; contrastive learning; GaitSSB; GaitLU-1M |
DOI | 10.1109/TPAMI.2023.3312419 |
Corresponding Author | Yu, Shiqi (yusq@sustech.edu.cn) |
Abstract | Gait depicts individuals' unique and distinguishing walking patterns and has become one of the most promising biometric features for human identification. As a fine-grained recognition task, gait recognition is easily affected by many factors and usually requires a large amount of completely annotated data that is costly and insatiable. This paper proposes a large-scale self-supervised benchmark for gait recognition with contrastive learning, aiming to learn the general gait representation from massive unlabelled walking videos for practical applications via offering informative walking priors and diverse real-world variations. Specifically, we collect a large-scale unlabelled gait dataset GaitLU-1M consisting of 1.02M walking sequences and propose a conceptually simple yet empirically powerful baseline model GaitSSB. Experimentally, we evaluate the pre-trained model on four widely-used gait benchmarks, CASIA-B, OU-MVLP, GREW and Gait3D, with or without transfer learning. The unsupervised results are comparable to or even better than the early model-based and GEI-based methods. After transfer learning, GaitSSB outperforms existing methods by a large margin in most cases, and also showcases superior generalization capacity. Further experiments indicate that the pre-training can save about 50% and 80% of the annotation costs of GREW and Gait3D. Theoretically, we discuss the critical issues for a gait-specific contrastive framework and present some insights for further study. As far as we know, GaitLU-1M is the first large-scale unlabelled gait dataset, and GaitSSB is the first method that achieves remarkable unsupervised results on the aforementioned benchmarks. |
WOS Keywords | RECOGNITION; MODEL |
Funding Projects | National Natural Science Foundation of China [61976144]; National Natural Science Foundation of China [62276025]; National Natural Science Foundation of China [62206022]; National Key Research and Development Program of China [2020AAA0140002]; Shenzhen International Research Cooperation Project [GJHZ20220913142611021]; Shenzhen Technology Plan Program [KQTD20170331093217368] |
WOS Research Areas | Computer Science; Engineering |
Language | English |
Publisher | IEEE COMPUTER SOC |
WOS Accession Number | WOS:001130146400054 |
Funding Agencies | National Natural Science Foundation of China; National Key Research and Development Program of China; Shenzhen International Research Cooperation Project; Shenzhen Technology Plan Program |
Source URL | [http://ir.ia.ac.cn/handle/173211/55509] |
Collection | Institute of Automation — Center for Research on Intelligent Perception and Computing |
Author Affiliations | 1. Chinese Acad Sci CASIA, Inst Automat, Ctr Res Intelligent Percept & Comp CRIPAC, Natl Lab Pattern Recognit NLPR, Beijing 100045, Peoples R China; 2. Univ Sci & Technol China, Hefei 230052, Anhui, Peoples R China; 3. Watrix Technol Ltd Co Ltd, Beijing 100088, Peoples R China; 4. Beijing Normal Univ, Sch Artificial Intelligence, Beijing 100875, Peoples R China; 5. Southern Univ Sci & Technol, Dept Comp Sci & Engn, Shenzhen 518055, Guangdong, Peoples R China |
Recommended Citation (GB/T 7714) | Fan, Chao, Hou, Saihui, Wang, Jilong, et al. Learning Gait Representation From Massive Unlabelled Walking Videos: A Benchmark[J]. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45(12): 14920-14937. |
APA | Fan, Chao, Hou, Saihui, Wang, Jilong, Huang, Yongzhen, & Yu, Shiqi. (2023). Learning Gait Representation From Massive Unlabelled Walking Videos: A Benchmark. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 45(12), 14920-14937. |
MLA | Fan, Chao, et al. "Learning Gait Representation From Massive Unlabelled Walking Videos: A Benchmark". IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 45.12 (2023): 14920-14937. |
Ingestion Method: OAI harvesting
Source: Institute of Automation