Chinese Academy of Sciences Institutional Repositories Grid
MUST-GAN: Multi-level Statistics Transfer for Self-driven Person Image Generation

Document type: Conference paper

Authors: Tianxiang Ma 1,2; Bo Peng 1,3; Wei Wang 1; Jing Dong 1
Publication year: 2021
Conference dates: June 19-25
Conference venue: Online
Abstract

Pose-guided person image generation usually relies on paired source-target images to supervise training, which significantly increases data-preparation effort and limits the applicability of such models. To address this problem, we propose a novel multi-level statistics transfer model that disentangles and transfers multi-level appearance features from person images and merges them with pose features to reconstruct the source person images themselves, so that the source images can serve as supervision for self-driven person image generation. Specifically, our model extracts multi-level features with an appearance encoder and learns the optimal appearance representation through an attention mechanism and attribute statistics. We then transfer these features to a pose-guided generator that re-fuses appearance and pose. Our approach allows flexible manipulation of person appearance and pose properties to perform pose transfer and clothes-style transfer tasks. Experimental results on the DeepFashion dataset demonstrate our method's superiority over state-of-the-art supervised and unsupervised methods. In addition, our approach also performs well in the wild.
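The abstract describes transferring appearance statistics onto pose features, but does not spell out the operation. A common formulation of such "statistics transfer" (in the style of adaptive instance normalization) matches the channel-wise mean and standard deviation of one feature map to another. The sketch below is an illustrative assumption, not the authors' implementation; all names (`transfer_statistics`, `pose_feat`, `appearance`) are hypothetical.

```python
import numpy as np

def transfer_statistics(content, style, eps=1e-5):
    """Align the per-channel mean/std of `content` to those of `style`.

    Both arrays are (channels, height, width). This is an AdaIN-style
    sketch of appearance-statistics transfer, not MUST-GAN's exact module.
    """
    c_mean = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True)
    s_mean = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True)
    # Normalize the content features, then re-scale/shift with style statistics.
    normalized = (content - c_mean) / (c_std + eps)
    return normalized * s_std + s_mean

# Stand-in feature maps for a quick demonstration.
rng = np.random.default_rng(0)
pose_feat = rng.normal(0.0, 1.0, size=(8, 16, 16))    # hypothetical pose features
appearance = rng.normal(2.0, 3.0, size=(8, 16, 16))   # hypothetical appearance features
out = transfer_statistics(pose_feat, appearance)
```

After the transfer, `out` carries the spatial structure of `pose_feat` but the per-channel statistics of `appearance`, which is the intuition behind re-fusing appearance and pose in the generator.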

Language: English
Source URL: [http://ir.ia.ac.cn/handle/173211/56656]
Collection: Institute of Automation, Center for Research on Intelligent Perception and Computing
Corresponding author: Jing Dong
Author affiliations: 1. Center for Research on Intelligent Perception and Computing, CASIA
2. School of Artificial Intelligence, University of Chinese Academy of Sciences
3. State Key Laboratory of Information Security, IIE, CAS
Recommended citation
GB/T 7714
Tianxiang Ma, Bo Peng, Wei Wang, et al. MUST-GAN: Multi-level Statistics Transfer for Self-driven Person Image Generation[C]. In: Online conference, June 19-25.

Ingestion method: OAI harvesting

Source: Institute of Automation


Unless otherwise stated, all content in this system is protected by copyright, and all rights are reserved.