|
Authors | Xiaomei Zhang (2); Feng Pan (1); Ke Xiang (1); Xiangyu Zhu (2); Chang Yu (2); Zidu Wang (2); Zhen Lei (2)
|
Publication Date | 2022
|
Conference Dates | 2022.10.27-28
|
Conference Venue | Beijing
|
Keywords | Unsupervised Human Parsing
|
Abstract (English) | Learning pixel-level representations of human parts without supervision is a challenging task, yet despite its significance, few works have explored it. In this work, we propose a contrastive and consistent learning network (C2L) for unsupervised human parsing. C2L mainly consists of a part contrastive module and a pixel consistent module. The part contrastive module distinguishes human parts of the same semantics from other parts by contrastive learning, pulling parts with the same semantics closer and pushing those with different semantics apart. The pixel consistent module obtains spatial correspondence within each view of an image, selecting semantically relevant pixels and suppressing semantically irrelevant ones. To improve its pattern analysis ability, we apply a sparse operation to the feed-forward networks of the pixel consistent module. Extensive experiments on a popular human parsing benchmark show that our method achieves competitive performance. |
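The part contrastive module described in the abstract follows the general contrastive-learning recipe of pulling same-semantic part features together and pushing different ones apart. The sketch below illustrates that idea with a generic InfoNCE-style loss in PyTorch; the function name, feature shapes, and temperature are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of a part-level contrastive (InfoNCE-style) loss, assuming
# K part features per view; this is not the authors' C2L code.
import torch
import torch.nn.functional as F

def part_contrastive_loss(parts_view1, parts_view2, temperature=0.1):
    """parts_view1, parts_view2: (K, D) part features from two augmented views
    of the same image; row k in both tensors is the same semantic part."""
    z1 = F.normalize(parts_view1, dim=1)
    z2 = F.normalize(parts_view2, dim=1)
    logits = z1 @ z2.t() / temperature                     # (K, K) similarities
    targets = torch.arange(z1.size(0), device=z1.device)   # positives on the diagonal
    # Cross-entropy pulls matching parts together and pushes the rest apart.
    return F.cross_entropy(logits, targets)

# Example: 8 semantic parts with 128-dim features from two views.
loss = part_contrastive_loss(torch.randn(8, 128), torch.randn(8, 128))
print(loss.item())
```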
Language | English
|
Source URL | [http://ir.ia.ac.cn/handle/173211/57135] |
Collection | Institute of Automation_National Laboratory of Pattern Recognition_Center for Biometrics and Security Research
|
Corresponding Author | Xiangyu Zhu |
Author Affiliations | 1. Sunny Optical Co., Ltd.; 2. National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences
|
Recommended Citation (GB/T 7714) |
Xiaomei Zhang, Feng Pan, Ke Xiang, et al. Contrastive and Consistent Learning for Unsupervised Human Parsing[C]. In: . Beijing. 2022.10.27-28.
|