Chinese Academy of Sciences Institutional Repositories Grid
Sparser spiking activity can be better: Feature Refine-and-Mask spiking neural network for event-based visual recognition

Document Type: Journal Article

Authors: Yao, Man (2,3); Zhang, Hengyu (2,4); Zhao, Guangshe (2); Zhang, Xiyu (2); Wang, Dingheng (5); Cao, Gang (6); Li, Guoqi (1,3)
Journal: NEURAL NETWORKS
Publication Date: 2023-09-01
Volume: 166; Pages: 410-423
ISSN: 0893-6080
Keywords: Spiking neural network; Event-based vision; Neuromorphic computing; Attention mechanism; Brain-inspired computing
DOI: 10.1016/j.neunet.2023.07.008
Corresponding Authors: Zhao, Guangshe (zhaogs@mail.xjtu.edu.cn); Li, Guoqi (guoqi.li@ia.ac.cn)
Abstract: Event-based vision, a new visual paradigm with bio-inspired dynamic perception and µs-level temporal resolution, has prominent advantages in many specific visual scenarios and has gained much research interest. The spiking neural network (SNN) is naturally suited to processing event streams due to its temporal information processing capability and event-driven nature. However, existing SNN works neglect the fact that input event streams are spatially sparse and temporally non-uniform, and simply treat these varying inputs equally. This interferes with the effectiveness and efficiency of existing SNNs. In this paper, we propose the feature Refine-and-Mask SNN (RM-SNN), which can self-adaptively regulate the spiking response in a data-dependent way. We use the Refine-and-Mask (RM) module to refine all features and mask the unimportant ones to optimize the membrane potential of spiking neurons, which in turn reduces spiking activity. Inspired by the fact that not all events in spatio-temporal streams are task-relevant, we apply the RM module in both the temporal and channel dimensions. Extensive experiments on seven event-based benchmarks (DVS128 Gesture, DVS128 Gait, CIFAR10-DVS, N-Caltech101, DailyAction-DVS, UCF101-DVS, and HMDB51-DVS) demonstrate that, under multi-scale constraints on the input time window, RM-SNN can significantly reduce the network's average spiking activity rate while improving task performance. In addition, by visualizing spiking responses, we analyze why sparser spiking activity can be better. Code © 2023 Elsevier Ltd. All rights reserved.
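Note: the abstract only sketches how the RM module works (refine features with channel and temporal scores, then mask the unimportant ones so masked channels do not charge the membrane potential). The snippet below is a minimal, illustrative PyTorch sketch of that idea, not the authors' released implementation; the tensor layout [T, B, C, H, W], the gating layers, the keep_ratio parameter, and the top-k masking rule are assumptions made purely for illustration.

import torch
import torch.nn as nn


class RefineAndMask(nn.Module):
    """Illustrative RM-style gating: refine features with channel/temporal
    scores, then hard-mask the least important channels so downstream
    spiking activity becomes sparser. (Assumed design, not the paper's code.)"""

    def __init__(self, channels: int, timesteps: int, keep_ratio: float = 0.5):
        super().__init__()
        self.keep_ratio = keep_ratio
        # Channel-wise scores from globally pooled features (squeeze-excite style).
        self.channel_fc = nn.Sequential(
            nn.Linear(channels, channels // 4), nn.ReLU(),
            nn.Linear(channels // 4, channels), nn.Sigmoid())
        # Per-timestep scores over the T dimension.
        self.temporal_fc = nn.Sequential(
            nn.Linear(timesteps, timesteps), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: [T, B, C, H, W] (time, batch, channel, height, width)
        T, B, C, H, W = x.shape
        pooled = x.mean(dim=(3, 4))                              # [T, B, C]
        ch_score = self.channel_fc(pooled)                       # [T, B, C]
        t_score = self.temporal_fc(
            pooled.mean(dim=2).permute(1, 0)).permute(1, 0)      # [T, B]
        # Refine: re-weight features in both channel and temporal dimensions.
        refined = x * ch_score[..., None, None] * t_score[..., None, None, None]
        # Mask: keep only the top-k channels per (t, b); zeroed channels never
        # charge the membrane potential, so fewer spikes are emitted downstream.
        k = max(1, int(self.keep_ratio * C))
        topk = ch_score.topk(k, dim=2).indices
        mask = torch.zeros_like(ch_score).scatter_(2, topk, 1.0)
        return refined * mask[..., None, None]


if __name__ == "__main__":
    # Hypothetical shapes for a quick sanity check.
    T, B, C, H, W = 8, 2, 16, 32, 32
    feats = torch.rand(T, B, C, H, W)
    rm = RefineAndMask(channels=C, timesteps=T, keep_ratio=0.5)
    out = rm(feats)
    print(out.shape, (out == 0).float().mean().item())  # roughly half the channels are masked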
WOS Keywords: INTELLIGENCE; DEEPER
Funding Projects: National Natural Science Foundation of China [61836004]; National Natural Science Foundation of China [62236009]; National Natural Science Foundation of China [U22A20103]; National Key R&D Program of China [2020AAA0105200]; Beijing Natural Science Foundation for Distinguished Young Scholars [JQ21015]; Pengcheng Lab
WOS Research Areas: Computer Science; Neurosciences & Neurology
Language: English
Publisher: PERGAMON-ELSEVIER SCIENCE LTD
WOS Record Number: WOS:001070932700001
Funding Organizations: National Natural Science Foundation of China; National Key R&D Program of China; Beijing Natural Science Foundation for Distinguished Young Scholars; Pengcheng Lab
Source URL: http://ir.ia.ac.cn/handle/173211/53137
Collection: 脑图谱与类脑智能实验室 (Laboratory of Brain Atlas and Brain-Inspired Intelligence)
Corresponding Authors: Zhao, Guangshe; Li, Guoqi
Author Affiliations:
1. Chinese Acad Sci, Inst Automat, Beijing 100089, Peoples R China
2. Xi An Jiao Tong Univ, Sch Automat Sci & Engn, Xian 710049, Shaanxi, Peoples R China
3. Peng Cheng Lab, Shenzhen 518000, Peoples R China
4. Tsinghua Univ, Tsinghua Shenzhen Int Grad Sch, Shenzhen 518000, Peoples R China
5. Northwest Inst Mech & Elect Engn, Xianyang, Shaanxi, Peoples R China
6. Beijing Acad Artificial Intelligence, Beijing 100089, Peoples R China
Recommended Citation Formats:
GB/T 7714: Yao, Man, Zhang, Hengyu, Zhao, Guangshe, et al. Sparser spiking activity can be better: Feature Refine-and-Mask spiking neural network for event-based visual recognition [J]. NEURAL NETWORKS, 2023, 166: 410-423.
APA: Yao, Man, Zhang, Hengyu, Zhao, Guangshe, Zhang, Xiyu, Wang, Dingheng, ... & Li, Guoqi. (2023). Sparser spiking activity can be better: Feature Refine-and-Mask spiking neural network for event-based visual recognition. NEURAL NETWORKS, 166, 410-423.
MLA: Yao, Man, et al. "Sparser spiking activity can be better: Feature Refine-and-Mask spiking neural network for event-based visual recognition." NEURAL NETWORKS 166 (2023): 410-423.

Deposit Method: OAI Harvesting

Source: Institute of Automation

