Discriminative feature learning based on multi-view attention network with diffusion joint loss for speech emotion recognition
Document type | Journal article |
Author | Yang Liu¹ |
Journal | Engineering Applications of Artificial Intelligence |
Publication date | 2024 |
Volume | 137 |
Pages | 15 |
Corresponding author email | zzqust@126.com (Zhao, Zhen) |
Keywords | Speech emotion recognition; Multi-view attention network; Diffusion joint loss |
DOI | 10.1016/j.engappai.2024.109219 |
Abstract | In speech emotion recognition, existing models often struggle to accurately classify emotions with high similarity. In this paper, we propose a novel architecture that integrates a multi-view attention network (MVAN) and a diffusion joint loss to alleviate this confusion by placing a stronger focus on emotions that are challenging to classify accurately. First, we use logarithmic Mel-spectrograms (log-Mels) together with their deltas and delta-deltas as three-channel input features to minimize external interference. Then, we design the MVAN to extract effective multi-time-scale emotion features, where channel and spatial attention selectively localize the regions of the input features related to the target emotion. A multi-time-view bidirectional long short-term memory (BiLSTM) network extracts shallow edge features and deep semantic features, and multi-scale self-attention fuses these through cross-scale attention fusion to obtain multi-time-scale emotion features. Finally, a diffusion joint loss strategy is introduced to distinguish emotional embeddings with high similarity via complex emotion triplets generated in a diffusing fashion. We evaluated the proposed method on the Interactive Emotional Dyadic Motion Capture (IEMOCAP), Institute of Automation, Chinese Academy of Sciences (CASIA), and Berlin Database of Emotional Speech (EMODB) corpora. The results show significant improvements over existing methods, achieving 86.87% WA, 86.60% UA, and 86.82% WF1 on IEMOCAP; 70.74% WA, 70.74% UA, and 70.25% WF1 on CASIA; and 93.65% WA, 91.13% UA, and 92.26% WF1 on EMODB. These results confirm the superiority of our method. |
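The abstract's front-end feature pipeline — stacking a log-Mel spectrogram with its deltas and delta-deltas into a three-channel input — can be sketched with the standard delta-regression formula. This is a minimal NumPy sketch, not the paper's code: the `delta` function, the regression window `N = 2`, and the toy spectrogram shape are all illustrative assumptions.

```python
import numpy as np

def delta(feat: np.ndarray, N: int = 2) -> np.ndarray:
    """Delta (regression) coefficients along the time axis (axis 0).

    feat: (T, n_mels) log-Mel spectrogram; returns an array of the same shape,
    computed as d_t = sum_{n=1..N} n * (c_{t+n} - c_{t-n}) / (2 * sum n^2).
    """
    T = feat.shape[0]
    denom = 2 * sum(n * n for n in range(1, N + 1))
    # Edge-pad in time so the regression window is defined at the boundaries.
    padded = np.pad(feat, ((N, N), (0, 0)), mode="edge")
    out = np.zeros_like(feat)
    for t in range(T):
        out[t] = sum(
            n * (padded[t + N + n] - padded[t + N - n]) for n in range(1, N + 1)
        ) / denom
    return out

# Toy log-Mel spectrogram: 100 frames x 40 mel bands (placeholder values).
log_mel = np.random.default_rng(0).standard_normal((100, 40))

# Stack static, delta, and delta-delta into a 3-channel input tensor.
three_channel = np.stack([log_mel, delta(log_mel), delta(delta(log_mel))], axis=0)
print(three_channel.shape)  # (3, 100, 40)
```

In practice the log-Mel spectrogram itself would come from an audio front end (e.g. a mel filterbank over STFT frames); only the delta stacking is shown here.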
Indexed in | SCI; EI |
Language | English |
Source URL | http://ir.psych.ac.cn/handle/311026/48745 |
Collection | Institute of Psychology, CAS Key Laboratory of Behavioral Science |
Author affiliations | 1. School of Information Science and Technology, Qingdao University of Science and Technology, Qingdao 266061, China; 2. School of Computer Science and Software, Tiangong University, Tianjin 300387, China; 3. CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100089, China |
Recommended citation (GB/T 7714) | Yang Liu, Xin Chen, Yuan Song, et al. Discriminative feature learning based on multi-view attention network with diffusion joint loss for speech emotion recognition[J]. Engineering Applications of Artificial Intelligence, 2024, 137: 15. |
APA | Yang Liu, Xin Chen, Yuan Song, Yarong Li, Shengbei Wang, ... & Zhen Zhao. (2024). Discriminative feature learning based on multi-view attention network with diffusion joint loss for speech emotion recognition. Engineering Applications of Artificial Intelligence, 137, 15. |
MLA | Yang Liu, et al. "Discriminative feature learning based on multi-view attention network with diffusion joint loss for speech emotion recognition." Engineering Applications of Artificial Intelligence 137 (2024): 15. |
Ingestion method: OAI harvest
Source: Institute of Psychology
Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.