Chinese Academy of Sciences Institutional Repositories Grid
Improved Video Emotion Recognition with Alignment of CNN and Human Brain Representations

Document Type: Journal Article

Authors: Fu, Kaicheng 1,2; Du, Changde 1; Wang, Shengpei 1; He, Huiguang 1,2
Journal: IEEE Transactions on Affective Computing
Publication Date: 2023-09-18
Pages: 1-15
Keywords: CNN-brain Alignment; Brain-guided Deep Learning; Video Emotion Recognition; Representation Similarity Analysis
ISSN: 1949-3045
DOI: 10.1109/TAFFC.2023.3316173
Abstract:

The ability to perceive emotions is an important criterion for judging whether a machine is intelligent. To this end, a large number of emotion recognition algorithms have been developed, especially for visual information such as video. Most previous studies are based on hand-crafted features or CNNs; the former fail to extract expressive features, while the latter still face the undesired affective gap. This drives us to ask what would happen if we could incorporate the human capability for emotional perception into a CNN. In this paper, we attempt to address this question by exploring the alignment between the representations of neural networks and human brain activity. In particular, we employ a dataset of visually evoked emotional brain activity to conduct a joint training strategy for the CNN. In the training phase, we introduce representation similarity analysis (RSA) to align the CNN with the human brain and obtain more brain-like features. Specifically, representation similarity matrices (RSMs) of multiple convolutional layers are averaged with learnable weights and related to the RSM of human brain activity. To obtain emotion-related brain activity, we perform voxel selection and denoising with a banded ridge model before computing the RSM. Extensive experiments on two challenging video emotion recognition datasets and multiple popular CNN architectures suggest that human brain activity can provide an inductive bias that steers CNNs toward better emotion recognition performance. Our source code is available at https://osf.io/ucx57.
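To make the training strategy described in the abstract concrete, the sketch below illustrates one way the RSA alignment term could look: per-layer RSMs are combined with learnable weights and pulled toward the brain RSM alongside the usual classification loss. This is a minimal PyTorch sketch under our own assumptions, not the authors' implementation (see https://osf.io/ucx57); every function, class, and parameter name here (representation_similarity_matrix, RSAAlignmentLoss, joint_loss, lam) is hypothetical. The voxel selection and banded ridge denoising mentioned in the abstract are assumed to happen upstream, when brain_rsm is computed from the recorded brain activity.

import torch
import torch.nn as nn
import torch.nn.functional as F

def representation_similarity_matrix(features: torch.Tensor) -> torch.Tensor:
    """Pearson-correlation RSM for a batch of stimuli.
    features: (n_stimuli, n_units) activations from a CNN layer or brain voxels.
    """
    z = features - features.mean(dim=1, keepdim=True)
    z = z / (z.norm(dim=1, keepdim=True) + 1e-8)
    return z @ z.t()  # (n_stimuli, n_stimuli)

class RSAAlignmentLoss(nn.Module):
    """Aligns a learnable weighted average of layer RSMs with a brain RSM."""

    def __init__(self, num_layers: int):
        super().__init__()
        # One learnable weight per convolutional layer, normalized by softmax.
        self.layer_logits = nn.Parameter(torch.zeros(num_layers))

    def forward(self, layer_features, brain_rsm):
        weights = F.softmax(self.layer_logits, dim=0)
        rsms = torch.stack(
            [representation_similarity_matrix(f.flatten(1)) for f in layer_features]
        )
        cnn_rsm = (weights.view(-1, 1, 1) * rsms).sum(dim=0)

        # Penalize 1 - Pearson correlation between the off-diagonal entries
        # of the weighted CNN RSM and the brain RSM.
        mask = ~torch.eye(cnn_rsm.size(0), dtype=torch.bool, device=cnn_rsm.device)
        a = cnn_rsm[mask] - cnn_rsm[mask].mean()
        b = brain_rsm[mask] - brain_rsm[mask].mean()
        return 1.0 - (a @ b) / (a.norm() * b.norm() + 1e-8)

# Joint objective: standard classification loss plus the alignment penalty,
# weighted by a hypothetical trade-off coefficient lam.
def joint_loss(logits, labels, layer_features, brain_rsm, align_loss, lam=0.1):
    return F.cross_entropy(logits, labels) + lam * align_loss(layer_features, brain_rsm)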

Language: English
Source URL: http://ir.ia.ac.cn/handle/173211/56583
Collection: Research Center for Brain-Inspired Intelligence - Neural Computation and Brain-Computer Interaction
Author Affiliations:
1. Laboratory of Brain Atlas and Brain-Inspired Intelligence, State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
2. School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
Recommended Citation:
GB/T 7714
Fu, Kaicheng, Du, Changde, Wang, Shengpei, et al. Improved Video Emotion Recognition with Alignment of CNN and Human Brain Representations[J]. IEEE Transactions on Affective Computing, 2023: 1-15.
APA: Fu, Kaicheng, Du, Changde, Wang, Shengpei, & He, Huiguang. (2023). Improved Video Emotion Recognition with Alignment of CNN and Human Brain Representations. IEEE Transactions on Affective Computing, 1-15.
MLA: Fu, Kaicheng, et al. "Improved Video Emotion Recognition with Alignment of CNN and Human Brain Representations." IEEE Transactions on Affective Computing (2023): 1-15.

Ingestion Method: OAI Harvesting

Source: Institute of Automation

