Chinese Academy of Sciences Institutional Repositories Grid
MAE-DFER: Efficient Masked Autoencoder for Self-supervised Dynamic Facial Expression Recognition

Document Type: Conference Paper

Authors: Licai Sun (3,4); Zheng Lian (4); Bin Liu (3,4); Jianhua Tao (1,2)
Publication Year: 2023
Conference Date: October 29 - November 3, 2023
Conference Location: Ottawa, ON, Canada
Abstract

Dynamic facial expression recognition (DFER) is essential to the development of intelligent and empathetic machines. Prior efforts in this field mainly fall into the supervised learning paradigm, which is severely restricted by the limited labeled data in existing datasets. Inspired by the recent unprecedented success of masked autoencoders (e.g., VideoMAE), this paper proposes MAE-DFER, a novel self-supervised method that leverages large-scale self-supervised pre-training on abundant unlabeled data to largely advance the development of DFER. Since the vanilla Vision Transformer (ViT) employed in VideoMAE requires substantial computation during fine-tuning, MAE-DFER develops an efficient local-global interaction Transformer (LGI-Former) as the encoder. Moreover, in addition to the standalone appearance content reconstruction in VideoMAE, MAE-DFER introduces explicit temporal facial motion modeling to encourage LGI-Former to excavate both static appearance and dynamic motion information. Extensive experiments on six datasets show that MAE-DFER consistently outperforms state-of-the-art supervised methods by significant margins (e.g., +6.30% UAR on DFEW and +8.34% UAR on MAFW), verifying that it can learn powerful dynamic facial representations via large-scale self-supervised pre-training. Besides, it has comparable or even better performance than VideoMAE, while largely reducing the computational cost (about 38% of the FLOPs). We believe MAE-DFER has paved a new way for the advancement of DFER and can inspire more relevant research in this field and even other related tasks. Code and models are publicly available at https://github.com/sunlicai/MAE-DFER.
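The record carries no implementation detail beyond the abstract, but the joint objective it describes, reconstructing both static appearance and temporal facial motion from masked video tokens, can be illustrated with a minimal sketch. Everything below is a hypothetical toy assuming a PyTorch-style setup; ToyDualTargetHead, the tensor shapes, and the motion_weight factor are illustrative stand-ins, not the authors' LGI-Former or their released training code.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ToyDualTargetHead(nn.Module):
        """Regress both appearance and motion targets from decoder features."""
        def __init__(self, feat_dim=256, target_dim=1536):
            super().__init__()
            self.appearance_head = nn.Linear(feat_dim, target_dim)  # static content
            self.motion_head = nn.Linear(feat_dim, target_dim)      # temporal motion

        def forward(self, feats, appearance_target, motion_target, motion_weight=1.0):
            loss_app = F.mse_loss(self.appearance_head(feats), appearance_target)
            loss_mot = F.mse_loss(self.motion_head(feats), motion_target)
            return loss_app + motion_weight * loss_mot  # joint static + dynamic objective

    # Toy usage with random tensors standing in for tokenized video frames.
    video = torch.randn(4, 8, 1536)               # (batch, frames, flattened patch dim)
    appearance_target = video[:, 1:]              # raw content of frames 1..T-1
    motion_target = video[:, 1:] - video[:, :-1]  # frame differences as a motion proxy
    feats = torch.randn(4, 7, 256)                # stand-in for masked-token features
    loss = ToyDualTargetHead()(feats, appearance_target, motion_target)
    loss.backward()

In this toy version the motion target is a plain frame difference; the weighting between the two losses and the exact form of the motion target are design choices the abstract does not specify.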

Source URL: http://ir.ia.ac.cn/handle/173211/57087
Collection: State Key Laboratory of Multimodal Artificial Intelligence Systems
Author Affiliations:
1. Beijing National Research Center for Information Science and Technology, Tsinghua University
2. Department of Automation, Tsinghua University
3. School of Artificial Intelligence, University of Chinese Academy of Sciences
4. Institute of Automation, Chinese Academy of Sciences, Beijing, China
Recommended Citation (GB/T 7714):
Licai Sun, Zheng Lian, Bin Liu, et al. MAE-DFER: Efficient Masked Autoencoder for Self-supervised Dynamic Facial Expression Recognition[C]. Ottawa, ON, Canada, October 29-November 3, 2023.

Deposit Method: OAI harvesting

Source: Institute of Automation


Unless otherwise stated, all content in this system is protected by copyright, and all rights are reserved.