Temporal Knowledge Sharing enable Spiking Neural Network Learning from Past and Future
Document type: Journal article
Authors | Dong, Yiting; Zhao, Dongcheng |
Journal | IEEE Transactions on Artificial Intelligence |
Publication date | 2024 |
Pages | 1-10 |
DOI | 10.1109/TAI.2024.3374268 |
Abstract | Spiking Neural Networks (SNNs) have attracted significant attention from researchers across various domains due to their brain-inspired information processing mechanism. However, SNNs typically grapple with challenges such as extended time steps, low temporal information utilization, and the requirement for consistent time steps between testing and training. These challenges leave SNNs with high latency. Moreover, the constraint on time steps necessitates retraining the model for new deployments, reducing adaptability. To address these issues, this paper proposes a novel perspective, viewing the SNN as a temporal aggregation model. We introduce the Temporal Knowledge Sharing (TKS) method, facilitating information interaction between different time points. TKS can be perceived as a form of temporal self-distillation. To validate the efficacy of TKS in information processing, we tested it on static datasets like CIFAR10, CIFAR100, and ImageNet-1k, and on neuromorphic datasets such as DVS-CIFAR10 and NCALTECH101. Experimental results demonstrate that our method achieves state-of-the-art performance compared to other algorithms. Furthermore, TKS addresses the temporal consistency challenge, endowing the model with superior temporal generalization capabilities. This allows the network to train with longer time steps and maintain high performance during testing with shorter time steps. Such an approach considerably accelerates the deployment of SNNs on edge devices. Finally, we conducted ablation experiments and tested TKS on fine-grained tasks, with results showcasing TKS's enhanced capability to process information efficiently. |
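The abstract describes TKS as a form of temporal self-distillation that lets information at different time points interact. The paper's exact formulation is not given here, so the following is only a minimal PyTorch sketch of one plausible reading: the time-averaged prediction serves as a shared teacher for each individual time step's output. The function name, tensor shapes, and loss weighting (`alpha`, `tau`) are all illustrative assumptions, not the authors' method.

```python
import torch
import torch.nn.functional as F

def temporal_self_distillation_loss(logits_per_step, targets, alpha=0.5, tau=2.0):
    """Sketch of a temporal self-distillation loss for an SNN.

    logits_per_step: (T, batch, classes), one set of logits per time step.
    The time-averaged logits act as a detached "teacher" that every
    individual time step is pulled toward, so early and late steps share
    knowledge instead of being supervised in isolation.
    """
    T = logits_per_step.shape[0]
    aggregate = logits_per_step.mean(dim=0)          # temporal aggregation
    teacher = aggregate.detach()                     # teacher carries no gradient

    # Standard task loss on the aggregated prediction.
    ce = F.cross_entropy(aggregate, targets)

    # KL divergence from each time step's softened prediction to the teacher.
    kd = logits_per_step.new_zeros(())
    for t in range(T):
        kd = kd + F.kl_div(
            F.log_softmax(logits_per_step[t] / tau, dim=1),
            F.softmax(teacher / tau, dim=1),
            reduction="batchmean",
        ) * (tau * tau)

    return ce + alpha * kd / T
```

Under this reading, training with a long horizon `T` would teach each step to approximate the aggregate, which is consistent with the abstract's claim that the network can then be tested with fewer time steps at little cost.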
URL | View full text |
Language | English |
Source URL | http://ir.ia.ac.cn/handle/173211/57258 |
Research unit | Research Center for Brain-Inspired Intelligence, Brain-Inspired Cognitive Computing |
Corresponding author | Zeng, Yi |
Recommended citation (GB/T 7714) | Dong, Yiting, Zhao, Dongcheng, Zeng, Yi. Temporal Knowledge Sharing enable Spiking Neural Network Learning from Past and Future[J]. IEEE Transactions on Artificial Intelligence, 2024: 1-10. |
APA | Dong, Yiting, Zhao, Dongcheng, & Zeng, Yi. (2024). Temporal Knowledge Sharing enable Spiking Neural Network Learning from Past and Future. IEEE Transactions on Artificial Intelligence, 1-10. |
MLA | Dong, Yiting, et al. "Temporal Knowledge Sharing enable Spiking Neural Network Learning from Past and Future". IEEE Transactions on Artificial Intelligence (2024): 1-10. |
Deposit method: OAI harvesting
Source: Institute of Automation
Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.