Chinese Academy of Sciences Institutional Repositories Grid
Contrastive and Consistent Learning for Unsupervised Human Parsing

Document Type: Conference Paper

Authors: Xiaomei Zhang (2); Feng Pan (1); Ke Xiang (1); Xiangyu Zhu (2); Chang Yu (2); Zidu Wang (2); Zhen Lei (2)
Publication Date: 2022
Conference Date: 2022.10.27-28
Conference Venue: Beijing
Keywords: Unsupervised Human Parsing
Abstract (English)

Learning pixel-level representations of human parts without
supervision is a challenging task. Despite its significance, few
works have explored this challenge. In this work, we propose a contrastive
and consistent learning network (C2L) for unsupervised human parsing.
C2L mainly consists of a part contrastive module and a pixel consistent
module. We design the part contrastive module to distinguish human parts
of the same semantics from other parts by contrastive learning,
which pulls semantically identical parts closer and pushes semantically different
ones apart. The pixel consistent module is proposed to obtain spatial correspondence across each view of an image, which selects semantically relevant image pixels and suppresses semantically irrelevant ones. To improve pattern analysis ability, we apply a sparse operation to the feed-forward networks of the pixel consistent module. Extensive experiments on a popular human parsing benchmark show that our method achieves competitive performance.

Language: English
Source URL: [http://ir.ia.ac.cn/handle/173211/57135]
Collection: Institute of Automation / National Laboratory of Pattern Recognition / Center for Research on Biometrics and Security Technology
Corresponding Author: Xiangyu Zhu
Affiliations: 1. Sunny Optical Co., Ltd.
2. National Lab of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences
Recommended Citation (GB/T 7714):
Xiaomei Zhang, Feng Pan, Ke Xiang, et al. Contrastive and Consistent Learning for Unsupervised Human Parsing[C]. Beijing. 2022.10.27-28.

Deposit Method: OAI Harvesting

Source: Institute of Automation


Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.