Chinese Academy of Sciences Institutional Repositories Grid
A novel transformer autoencoder for multi-modal emotion recognition with incomplete data

Document Type: Journal Article

Authors: Cheng, Cheng [4]; Liu, Wenzhe [3]; Fan, Zhaoxin [2]; Feng, Lin [4]; Jia, Ziyu [1]
Journal: NEURAL NETWORKS
Publication Date: 2024-04-01
Volume: 172; Pages: 12
ISSN: 0893-6080
Keywords: Multi-modal signals; Emotion recognition; Incomplete data; Transformer autoencoder; Convolutional encoder
DOI: 10.1016/j.neunet.2024.106111
Corresponding Author: Feng, Lin (fenglin@dlut.edu.cn)
Abstract: Multi-modal signals have become essential data for emotion recognition since they can represent emotions more comprehensively. However, in real-world environments, it is often impossible to acquire complete data on multi-modal signals, and the problem of missing modalities causes severe performance degradation in emotion recognition. Therefore, this paper represents the first attempt to use a transformer-based architecture aiming to fill in modality-incomplete data from partially observed data for multi-modal emotion recognition (MER). Concretely, this paper proposes a novel unified model called the transformer autoencoder (TAE), comprising a modality-specific hybrid transformer encoder, an inter-modality transformer encoder, and a convolutional decoder. The modality-specific hybrid transformer encoder bridges a convolutional encoder and a transformer encoder, allowing the encoder to learn local and global context information within each particular modality. The inter-modality transformer encoder builds and aligns global cross-modal correlations and models long-range contextual information across different modalities. The convolutional decoder decodes the encoded features to produce more precise recognition. In addition, a regularization term is introduced into the convolutional decoder to force the decoder to fully leverage both complete and incomplete data for emotion recognition on missing data. Accuracies of 96.33%, 95.64%, and 92.69% are attained on the available data of the DEAP and SEED-IV datasets, and accuracies of 93.25%, 92.23%, and 81.76% are obtained on the missing data. In particular, the model achieves a 5.61% advantage with 70% missing data, demonstrating that it outperforms some state-of-the-art approaches in incomplete multi-modal learning.
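The abstract describes the TAE architecture but this record includes no reference implementation. Below is a minimal PyTorch sketch of how the described components (modality-specific hybrid encoders, an inter-modality transformer encoder, and convolutional decoders) could fit together; all class names, layer depths, and dimensions are illustrative assumptions, not the authors' code.

# Minimal sketch of the TAE described in the abstract. All names, depths,
# and dimensions are illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn

class HybridModalityEncoder(nn.Module):
    """Modality-specific hybrid encoder: a convolutional encoder (local
    context) bridged to a transformer encoder (global context)."""
    def __init__(self, in_channels, d_model=128, n_heads=4, n_layers=2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, d_model, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(d_model, d_model, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, n_layers)

    def forward(self, x):                  # x: (batch, channels, time)
        h = self.conv(x).transpose(1, 2)   # -> (batch, time, d_model)
        return self.transformer(h)         # global context within the modality

class TAE(nn.Module):
    """Transformer autoencoder: per-modality hybrid encoders, an
    inter-modality transformer for cross-modal correlations, and
    convolutional decoders that reconstruct each (possibly missing) input."""
    def __init__(self, modality_channels, d_model=128, n_classes=3):
        super().__init__()
        self.encoders = nn.ModuleList(
            HybridModalityEncoder(c, d_model) for c in modality_channels)
        cross = nn.TransformerEncoderLayer(d_model, 4, batch_first=True)
        self.inter_modality = nn.TransformerEncoder(cross, 2)
        self.decoders = nn.ModuleList(
            nn.Conv1d(d_model, c, kernel_size=3, padding=1)
            for c in modality_channels)
        self.classifier = nn.Linear(d_model, n_classes)

    def forward(self, xs):  # xs: one (batch, channels, time) tensor per modality
        feats = [enc(x) for enc, x in zip(self.encoders, xs)]
        fused = self.inter_modality(torch.cat(feats, dim=1))   # cross-modal fusion
        chunks = torch.split(fused, [f.size(1) for f in feats], dim=1)
        recons = [dec(c.transpose(1, 2))                       # reconstruct inputs
                  for dec, c in zip(self.decoders, chunks)]
        logits = self.classifier(fused.mean(dim=1))            # pooled emotion logits
        return logits, recons

# Usage example (hypothetical shapes): EEG with 32 channels plus peripheral
# signals with 8 channels, 128 time steps each.
model = TAE(modality_channels=[32, 8])
eeg, periph = torch.randn(4, 32, 128), torch.randn(4, 8, 128)
logits, recons = model([eeg, periph])    # logits: (4, 3); recons match input shapes

Under this sketch, the regularization term mentioned in the abstract could be realized as a reconstruction loss computed on the decoder outputs (masked to observed modality entries) and added to the classification loss; the paper's actual weighting and masking scheme may differ.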
Funding Projects: Fundamental Research Funds for the Central Universities, China [DUT19RC(3)012]; National Natural Science Foundation of China [62306317]; China Postdoctoral Science Foundation, China [GZC20232992]; China Postdoctoral Science Foundation, China [2023M733738]
WOS Research Areas: Computer Science; Neurosciences & Neurology
Language: English
Publisher: PERGAMON-ELSEVIER SCIENCE LTD
WOS Accession Number: WOS:001163939200001
Funding Organizations: Fundamental Research Funds for the Central Universities, China; National Natural Science Foundation of China; China Postdoctoral Science Foundation, China
Source URL: http://ir.ia.ac.cn/handle/173211/55673
Collection: Laboratory of Brain Atlas and Brain-Inspired Intelligence
作者单位1.Univ Chinese Acad Sci, Inst Automat, Chinese Acad Sci, Brainnetome Ctr, Beijing, Peoples R China
2.Renmin Univ China, Psyche AI Inc, Beijing, Peoples R China
3.Huzhou Univ, Sch Informat Engn, Huzhou, Peoples R China
4.Dalian Univ Technol, Dept Comp Sci & Technol, Dalian, Peoples R China
Recommended Citation:
GB/T 7714: Cheng, Cheng, Liu, Wenzhe, Fan, Zhaoxin, et al. A novel transformer autoencoder for multi-modal emotion recognition with incomplete data[J]. NEURAL NETWORKS, 2024, 172: 12.
APA: Cheng, Cheng, Liu, Wenzhe, Fan, Zhaoxin, Feng, Lin, & Jia, Ziyu. (2024). A novel transformer autoencoder for multi-modal emotion recognition with incomplete data. NEURAL NETWORKS, 172, 12.
MLA: Cheng, Cheng, et al. "A novel transformer autoencoder for multi-modal emotion recognition with incomplete data". NEURAL NETWORKS 172 (2024): 12.

Ingest Method: OAI harvesting

Source: Institute of Automation

