Context-Dependent Domain Adversarial Neural Network for Multimodal Emotion Recognition
Document Type: Conference Paper
Authors | Zheng Lian1,3 |
Publication Date | 2020 |
Conference Dates | 25-29 October 2020 |
Conference Location | Shanghai, China |
Abstract | Emotion recognition remains a complex task due to speaker variations and low-resource training samples. To address these difficulties, we focus on domain adversarial neural networks (DANN) for emotion recognition. The primary task is to predict emotion labels. The secondary task is to learn a common representation in which speaker identities cannot be distinguished. This approach brings the representations of different speakers closer together. Meanwhile, by using unlabeled data during training, we alleviate the impact of low-resource training samples. Prior work has also found that contextual information and multimodal features are important for emotion recognition, yet previous DANN-based approaches ignore this information, limiting their performance. In this paper, we propose the context-dependent domain adversarial neural network for multimodal emotion recognition. |
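The adversarial setup described in the abstract — a shared representation trained jointly against an emotion classifier (primary task) and a speaker discriminator whose gradient is reversed (secondary task) — is commonly realized with a gradient reversal layer. The following is a minimal NumPy sketch of that idea under stated assumptions; it is not the authors' implementation, and all variable names, dimensions, and toy data are illustrative:

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the last axis
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(probs, labels):
    # mean negative log-likelihood of the true classes
    return -np.log(probs[np.arange(len(labels)), labels]).mean()

class GradientReversal:
    """Identity in the forward pass; scales gradients by -lambda in the
    backward pass, so the encoder is pushed to *confuse* the speaker
    discriminator while the discriminator itself still learns normally."""
    def __init__(self, lam):
        self.lam = lam
    def forward(self, x):
        return x
    def backward(self, grad):
        return -self.lam * grad

# Toy shared representations: 4 utterances, 8-dim features,
# 3 emotion classes, 2 speakers (all hypothetical).
rng = np.random.default_rng(0)
h = rng.normal(size=(4, 8))
W_emo = rng.normal(size=(8, 3))
W_spk = rng.normal(size=(8, 2))
emo_labels = np.array([0, 1, 2, 1])
spk_labels = np.array([0, 0, 1, 1])

grl = GradientReversal(lam=0.1)
loss_emo = cross_entropy(softmax(h @ W_emo), emo_labels)
loss_spk = cross_entropy(softmax(grl.forward(h) @ W_spk), spk_labels)
# Both losses are minimized jointly; the reversal layer flips the
# speaker-loss gradient with respect to h, making h speaker-invariant.
total = loss_emo + loss_spk
```

In a full system the linear heads would be replaced by the paper's context-dependent multimodal encoder and classifiers, but the gradient-reversal mechanics are the same.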
Language | English |
Source URL | [http://ir.ia.ac.cn/handle/173211/44722] |
Collection | National Laboratory of Pattern Recognition_Intelligent Interaction |
Author Affiliations | 1.National Laboratory of Pattern Recognition, CASIA, Beijing 2.Huawei Technologies Co., LTD., Beijing 3.School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 4.CAS Center for Excellence in Brain Science and Intelligence Technology, Beijing |
Recommended Citation (GB/T 7714) | Zheng Lian, Jianhua Tao, Bin Liu, et al. Context-Dependent Domain Adversarial Neural Network for Multimodal Emotion Recognition[C]. Shanghai, China, 25-29 October 2020. |
Deposit Method: OAI Harvesting
Source: Institute of Automation
Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.