Chinese Academy of Sciences Institutional Repositories Grid
Transformer-Based Spiking Neural Networks for Multimodal Audiovisual Classification

Document type: Journal article

Authors: Guo, Lingyue (1); Gao, Zeyu (1); Qu, Jinye (2); Zheng, Suiwu (1); Jiang, Runhao (3); Lu, Yanfeng (1); Qiao, Hong (1)
Journal: IEEE TRANSACTIONS ON COGNITIVE AND DEVELOPMENTAL SYSTEMS
Publication date: 2024-06-01
Volume: 16; Issue: 3; Pages: 1077-1086
Keywords: Neurons; Visualization; Task analysis; Membrane potentials; Transformers; Biological system modeling; Micromechanical devices; Audiovisual classification; audiovisual data sets; multimodal recognition; spiking neural network (SNN)
ISSN: 2379-8920
DOI: 10.1109/TCDS.2023.3327081
Corresponding author: Lu, Yanfeng (yanfeng.lv@ia.ac.cn)
Abstract: The spiking neural networks (SNNs), as brain-inspired neural networks, have received noteworthy attention due to their advantages of low power consumption, high parallelism, and high fault tolerance. While SNNs have shown promising results in uni-modal data tasks, their deployment in multimodal audiovisual classification remains limited, and the effectiveness of capturing correlations between visual and audio modalities in SNNs needs improvement. To address these challenges, we propose a novel model called spiking multimodal transformer (SMMT) that combines SNNs and Transformers for multimodal audiovisual classification. The SMMT model integrates uni-modal subnetworks for visual and auditory modalities with a novel spiking cross-attention module for fusion, enhancing the correlation between visual and audio modalities. This approach leads to competitive accuracy in multimodal classification tasks with low energy consumption, making it an effective and energy-efficient solution. Extensive experiments on a public event-based data set (N-TIDIGIT&MNIST-DVS) and two self-made audiovisual data sets of real-world objects (CIFAR10-AV and UrbanSound8K-AV) demonstrate the effectiveness and energy efficiency of the proposed SMMT model in multimodal audiovisual classification tasks. Our constructed multimodal audiovisual data sets can be accessed at https://github.com/Guo-Lingyue/SMMT.
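The abstract describes fusing visual and audio spike trains through a spiking cross-attention module. The sketch below is a minimal toy illustration of that general idea, not the authors' SMMT implementation: all names, dimensions, and the leaky integrate-and-fire (LIF) dynamics are illustrative assumptions. Queries are drawn from one modality, keys and values from the other, and the attention output is re-spiked through LIF neurons so the fused representation stays binary.

```python
# Hypothetical sketch of spiking cross-attention between two modalities.
# Shapes, weights, and LIF parameters are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)

T, N, D = 4, 6, 8          # time steps, tokens per modality, feature dim
V_THRESH = 1.0             # LIF firing threshold

def lif_spikes(currents, v_thresh=V_THRESH, tau=2.0):
    """Integrate input currents over time with a leaky membrane and emit
    binary spikes when the potential crosses the threshold (hard reset)."""
    v = np.zeros(currents.shape[1:])
    spikes = np.zeros_like(currents)
    for t in range(currents.shape[0]):
        v = v + (currents[t] - v) / tau       # leaky integration step
        fired = v >= v_thresh
        spikes[t] = fired.astype(float)
        v = np.where(fired, 0.0, v)           # reset neurons that fired
    return spikes

def spiking_cross_attention(vis_spikes, aud_spikes, Wq, Wk, Wv):
    """Queries from the visual spike train, keys/values from the audio one;
    the attention readout is passed back through LIF dynamics to stay binary."""
    q = vis_spikes @ Wq                                   # (T, N, D)
    k = aud_spikes @ Wk
    v = aud_spikes @ Wv
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(D)        # (T, N, N)
    attn = np.exp(scores) / np.exp(scores).sum(-1, keepdims=True)
    return lif_spikes(attn @ v)                           # fused spike train

# Toy binary spike trains standing in for encoded visual/audio features.
vis = (rng.random((T, N, D)) > 0.7).astype(float)
aud = (rng.random((T, N, D)) > 0.7).astype(float)
Wq, Wk, Wv = (rng.standard_normal((D, D)) * 0.5 for _ in range(3))

fused = spiking_cross_attention(vis, aud, Wq, Wk, Wv)
print(fused.shape)   # fused output keeps the (time, token, feature) layout
```

Keeping the output binary via the LIF re-spiking step is what makes this attention pattern compatible with event-driven, low-power hardware, which is the efficiency argument the abstract makes for SNNs.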
WOS keywords: AFFECT RECOGNITION
Funding project: National Key Research and Development Plan of China
WOS research areas: Computer Science; Robotics; Neurosciences & Neurology
Language: English
WOS record number: WOS:001247154200024
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Funding agency: National Key Research and Development Plan of China
Source URL: http://ir.ia.ac.cn/handle/173211/59073
Collection: Institute of Automation / State Key Laboratory of Management and Control for Complex Systems / Robot Application and Theory Group
Author affiliations:
1. Chinese Acad Sci CASIA, Inst Automat, State Key Lab Multimodal Artificial Intelligence S, Beijing 100190, Peoples R China
2. Univ Chinese Acad Sci UCAS, Sch Artificial Intelligence, Beijing 100049, Peoples R China
3. Zhejiang Univ, Coll Comp Sci & Technol, Hangzhou 310027, Peoples R China
Recommended citation:
GB/T 7714: Guo, Lingyue, Gao, Zeyu, Qu, Jinye, et al. Transformer-Based Spiking Neural Networks for Multimodal Audiovisual Classification[J]. IEEE TRANSACTIONS ON COGNITIVE AND DEVELOPMENTAL SYSTEMS, 2024, 16(3): 1077-1086.
APA: Guo, Lingyue, Gao, Zeyu, Qu, Jinye, Zheng, Suiwu, Jiang, Runhao, ... & Qiao, Hong. (2024). Transformer-Based Spiking Neural Networks for Multimodal Audiovisual Classification. IEEE TRANSACTIONS ON COGNITIVE AND DEVELOPMENTAL SYSTEMS, 16(3), 1077-1086.
MLA: Guo, Lingyue, et al. "Transformer-Based Spiking Neural Networks for Multimodal Audiovisual Classification". IEEE TRANSACTIONS ON COGNITIVE AND DEVELOPMENTAL SYSTEMS 16.3 (2024): 1077-1086.

Deposit method: OAI harvesting

Source: Institute of Automation


Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.