Chinese Academy of Sciences Institutional Repositories Grid
SMIN: Semi-Supervised Multi-Modal Interaction Network for Conversational Emotion Recognition

Document Type: Journal Article

Authors: Lian, Zheng (1,4); Liu, Bin (4); Tao, Jianhua (2,3,4)
Journal: IEEE TRANSACTIONS ON AFFECTIVE COMPUTING
Publication Date: 2023-07-01
Volume: 14; Issue: 3; Pages: 2415-2429
ISSN: 1949-3045
Keywords: Emotion recognition; Feature extraction; Training; Acoustics; Semisupervised learning; Benchmark testing; Hidden Markov models; Semi-supervised multi-modal interaction network (SMIN); conversational emotion recognition; semi-supervised learning; intra-modal interaction; cross-modal interaction
DOI: 10.1109/TAFFC.2022.3141237
Corresponding Authors: Liu, Bin (liubin@nlpr.ia.ac.cn); Tao, Jianhua (jhtao@nlpr.ia.ac.cn)
Abstract: Conversational emotion recognition is a crucial research topic in human-computer interaction. Due to the heavy annotation cost and inevitable label ambiguity, collecting large amounts of labeled data is challenging and expensive, which restricts the performance of current fully-supervised methods in this domain. To address this problem, researchers attempt to distill knowledge from unlabeled data via semi-supervised learning. However, most of these semi-supervised methods ignore multimodal interactive information, although recent works have proven that such interactive information is essential for emotion recognition. To this end, we propose a novel framework, the "Semi-supervised Multi-modal Interaction Network (SMIN)", that seamlessly integrates semi-supervised learning with multimodal interactions. SMIN contains two well-designed semi-supervised modules, the "Intra-modal Interactive Module (IIM)" and the "Cross-modal Interactive Module (CIM)", to learn intra- and cross-modal interactions. These two modules leverage additional unlabeled data to extract emotion-salient representations. To capture additional contextual information, we utilize hierarchical recurrent networks followed by a hybrid fusion strategy to integrate multimodal features, which are then used for conversational emotion recognition. Experimental results on four benchmark datasets (i.e., IEMOCAP, MELD, CMU-MOSI, and CMU-MOSEI) demonstrate that SMIN outperforms existing state-of-the-art approaches to emotion recognition.
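The record itself contains no code, so the following is a minimal, hypothetical PyTorch sketch of the "hierarchical recurrent networks followed by a hybrid fusion strategy" idea from the abstract. All names and dimensions here (ModalityEncoder, HybridFusion, feature sizes, class count) are illustrative assumptions, not the authors' implementation, and the semi-supervised IIM/CIM modules are omitted.

```python
# Hypothetical sketch of dialogue-level recurrent encoding per modality plus
# a hybrid (early + late) fusion head. Not the authors' code; all names and
# dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class ModalityEncoder(nn.Module):
    """Bidirectional GRU over the sequence of utterance features of one modality."""
    def __init__(self, feat_dim, hidden_dim):
        super().__init__()
        self.gru = nn.GRU(feat_dim, hidden_dim, batch_first=True, bidirectional=True)

    def forward(self, x):          # x: (batch, num_utterances, feat_dim)
        out, _ = self.gru(x)       # out: (batch, num_utterances, 2 * hidden_dim)
        return out

class HybridFusion(nn.Module):
    """Combines early fusion (concatenated contexts) with late fusion
    (averaged per-modality logits) -- one plausible reading of a
    'hybrid fusion strategy'."""
    def __init__(self, hidden_dim, num_classes):
        super().__init__()
        d = 2 * hidden_dim                                # bidirectional output size
        self.early_head = nn.Linear(2 * d, num_classes)   # concat of two modalities
        self.audio_head = nn.Linear(d, num_classes)
        self.text_head = nn.Linear(d, num_classes)

    def forward(self, h_audio, h_text):
        early = self.early_head(torch.cat([h_audio, h_text], dim=-1))
        late = 0.5 * (self.audio_head(h_audio) + self.text_head(h_text))
        return early + late        # (batch, num_utterances, num_classes)

# Usage: acoustic and textual utterance features for 10-turn dialogues.
audio = torch.randn(4, 10, 128)    # (batch, turns, acoustic feature dim)
text = torch.randn(4, 10, 300)     # (batch, turns, textual feature dim)
enc_a, enc_t = ModalityEncoder(128, 64), ModalityEncoder(300, 64)
fusion = HybridFusion(64, num_classes=6)
logits = fusion(enc_a(audio), enc_t(text))
print(logits.shape)                # torch.Size([4, 10, 6]): one prediction per utterance
```

The key design point the abstract implies is that context flows across utterances (the recurrent layer spans the whole dialogue), so each utterance's emotion logits depend on its conversational neighbors, not just its own features.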
WOS Keywords: SENTIMENT ANALYSIS; SPEECH; FUSION
Funding Projects: National Key Research and Development Plan of China [2017YFC0820602]; National Natural Science Foundation of China (NSFC) [61831022, 61771472, 61901473, 61773379]
WOS Research Area: Computer Science
Language: English
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
WOS Record ID: WOS:001075041900053
Funding Organizations: National Key Research and Development Plan of China; National Natural Science Foundation of China (NSFC)
Source URL: http://ir.ia.ac.cn/handle/173211/52971
Collection: State Key Laboratory of Multimodal Artificial Intelligence Systems
Author Affiliations:
1. Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing 100190, Peoples R China
2. Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing 100049, Peoples R China
3. CAS Ctr Excellence Brain Sci & Intelligence Techno, Beijing 100190, Peoples R China
4. Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
Recommended Citation:
GB/T 7714
Lian, Zheng, Liu, Bin, Tao, Jianhua. SMIN: Semi-Supervised Multi-Modal Interaction Network for Conversational Emotion Recognition[J]. IEEE TRANSACTIONS ON AFFECTIVE COMPUTING, 2023, 14(3): 2415-2429.
APA: Lian, Zheng, Liu, Bin, & Tao, Jianhua. (2023). SMIN: Semi-Supervised Multi-Modal Interaction Network for Conversational Emotion Recognition. IEEE TRANSACTIONS ON AFFECTIVE COMPUTING, 14(3), 2415-2429.
MLA: Lian, Zheng, et al. "SMIN: Semi-Supervised Multi-Modal Interaction Network for Conversational Emotion Recognition." IEEE TRANSACTIONS ON AFFECTIVE COMPUTING 14.3 (2023): 2415-2429.

Deposit Method: OAI harvesting

Source: Institute of Automation

