Chinese Academy of Sciences Institutional Repositories Grid
GaFET: Learning Geometry-aware Facial Expression Translation from In-The-Wild Images

Document type: Conference paper

Authors: Tianxiang Ma (1,3); Bingchuan Li (4); Qian He (4); Jing Dong (3); Tieniu Tan (2,3)
Publication year: 2023
Conference dates: October 2-6
Conference location: Paris, France
Abstract

While current face animation methods can manipulate expressions individually, they suffer from several limitations. The expressions manipulated by some motion-based facial reenactment models are crude, while other approaches modeled with facial action units cannot generalize to arbitrary expressions not covered by the annotations. In this paper, we introduce a novel Geometry-aware Facial Expression Translation (GaFET) framework, which is based on parametric 3D facial representations and can stably decouple expressions. Within this framework, a Multi-level Feature Aligned Transformer is proposed to complement non-geometric facial detail features while addressing the alignment challenge of spatial features. Further, we design a De-expression model based on StyleGAN to reduce the learning difficulty of GaFET on unpaired "in-the-wild" images. Extensive qualitative and quantitative experiments demonstrate that we achieve higher-quality and more accurate facial expression transfer results than state-of-the-art methods, and show applicability to various poses and complex textures. Moreover, our method requires neither videos nor annotated training data, making it easier to use and generalize.
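The core idea in the abstract — representing expressions with parametric 3D facial coefficients so that they can be decoupled from identity and pose — can be illustrated with a minimal sketch. This is a hypothetical toy example, not the paper's actual model or API: it assumes a 3DMM-style parameterization where transferring an expression amounts to swapping only the expression coefficients while keeping the target's identity and pose.

```python
from dataclasses import dataclass, replace

# Illustrative sketch only: a 3DMM-style parametric face representation.
# Field names and the transfer function are assumptions for exposition,
# not the GaFET implementation.

@dataclass(frozen=True)
class FaceParams:
    identity: tuple    # shape coefficients (who the person is)
    expression: tuple  # expression coefficients (what the face is doing)
    pose: tuple        # head rotation / translation

def transfer_expression(source: FaceParams, target: FaceParams) -> FaceParams:
    """Keep the target's identity and pose; take the source's expression."""
    return replace(target, expression=source.expression)

smiling_actor = FaceParams(identity=(0.1, 0.9), expression=(1.0, 0.0), pose=(0.0,))
neutral_user  = FaceParams(identity=(0.7, 0.2), expression=(0.0, 0.0), pose=(0.3,))

result = transfer_expression(smiling_actor, neutral_user)
```

Because only the expression coefficients change, the result preserves the target's identity and pose, which is the decoupling property the abstract emphasizes; the full method additionally reconstructs non-geometric detail via its transformer module.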

Source URL: [http://ir.ia.ac.cn/handle/173211/56658]
Collection: Institute of Automation, Center for Research on Intelligent Perception and Computing
Corresponding author: Jing Dong
Affiliations: 1. School of Artificial Intelligence, UCAS
2. Nanjing University
3. CRIPAC & NLPR, CASIA
4. ByteDance Ltd, Beijing, China
Recommended citation (GB/T 7714):
Tianxiang Ma, Bingchuan Li, Qian He, et al. GaFET: Learning Geometry-aware Facial Expression Translation from In-The-Wild Images[C]. Paris, France, October 2-6.

Ingestion method: OAI harvesting

Source: Institute of Automation


Unless otherwise stated, all content in this system is protected by copyright, and all rights are reserved.