Chinese Academy of Sciences Institutional Repositories Grid
Decoding Visual Neural Representations by Multimodal Learning of Brain-Visual-Linguistic Features

Document type: Journal article

Authors: Du Changde (杜长德)2; Fu Kaicheng (付铠成)2; Li Jinpeng (李劲鹏)1; He Huiguang (何晖光)2
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Publication date: 2023
Pages: 1-17
English abstract:

Decoding human visual neural representations is a challenging task of great scientific significance: it can reveal vision-processing mechanisms and inform the development of brain-like intelligent machines. Most existing methods generalize poorly to novel categories for which no neural data are available for training, for two main reasons: 1) under-exploitation of the multimodal semantic knowledge underlying the neural data, and 2) the small amount of paired (stimulus-response) training data. To overcome these limitations, this paper presents a generic neural decoding method called BraVL that uses multimodal learning of brain-visual-linguistic features. We focus on modeling the relationships between brain, visual, and linguistic features via multimodal deep generative models. Specifically, we leverage the mixture-of-products-of-experts formulation to infer a latent code that enables coherent joint generation of all three modalities. To learn a more consistent joint representation and improve data efficiency when brain-activity data are limited, we exploit both intra- and inter-modality mutual-information-maximization regularization terms. In particular, our BraVL model can be trained under various semi-supervised scenarios to incorporate visual and textual features obtained from extra categories. Finally, we construct three trimodal matching datasets, and extensive experiments lead to several interesting conclusions and cognitive insights: 1) decoding novel visual categories from human brain activity is practically possible with good accuracy; 2) decoding models that combine visual and linguistic features perform much better than those using either alone; and 3) visual perception may be accompanied by linguistic influences in representing the semantics of visual stimuli.
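The mixture-of-products-of-experts fusion mentioned in the abstract can be sketched for diagonal Gaussian experts. This is an illustrative sketch only, not the authors' implementation: the function names, the equal-weight subset mixture, and the single-sample selection are assumptions. For a product of Gaussian experts, precisions add and the joint mean is the precision-weighted average of the expert means.

```python
import numpy as np

def product_of_experts(mus, logvars):
    """Combine per-modality diagonal Gaussian posteriors q(z|x_m) into one
    Gaussian via a product of experts: precisions add, and the joint mean
    is the precision-weighted average of the expert means."""
    precisions = [np.exp(-lv) for lv in logvars]   # 1/sigma^2 for each expert
    joint_prec = sum(precisions)
    joint_mu = sum(p * m for p, m in zip(precisions, mus)) / joint_prec
    joint_logvar = -np.log(joint_prec)
    return joint_mu, joint_logvar

def mixture_of_products(mus, logvars, subsets, rng=None):
    """Mixture-of-products-of-experts over modality subsets: draw one
    subset uniformly at random, then return the PoE of the experts in
    that subset (equal mixture weights assumed here)."""
    rng = rng or np.random.default_rng()
    subset = subsets[rng.integers(len(subsets))]
    return product_of_experts([mus[i] for i in subset],
                              [logvars[i] for i in subset])
```

For example, fusing two unit-variance experts with means 0 and 2 yields a joint mean of 1 and a halved variance, matching the precision-weighted formula above.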

Language: English
Source URL: http://ir.ia.ac.cn/handle/173211/51626
Division: Research Center for Brain-inspired Intelligence — Neural Computation and Brain-Computer Interaction
Corresponding author: He Huiguang (何晖光)
Affiliations: 1. Ningbo HwaMei Hospital, UCAS
2. Institute of Automation, Chinese Academy of Sciences
Recommended citation formats:
GB/T 7714
Du CD, Fu KC, Li JP, et al. Decoding Visual Neural Representations by Multimodal Learning of Brain-Visual-Linguistic Features[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023: 1-17.
APA Du, C. D., Fu, K. C., Li, J. P., & He, H. G. (2023). Decoding Visual Neural Representations by Multimodal Learning of Brain-Visual-Linguistic Features. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1-17.
MLA Du CD, et al. "Decoding Visual Neural Representations by Multimodal Learning of Brain-Visual-Linguistic Features." IEEE Transactions on Pattern Analysis and Machine Intelligence (2023): 1-17.

Ingestion method: OAI harvesting

Source: Institute of Automation


Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.