Chinese Academy of Sciences Institutional Repositories Grid
Exploring Adversarial Attack in Spiking Neural Networks With Spike-Compatible Gradient

Document Type: Journal Article

Authors: Liang, Ling [4]; Hu, Xing [3]; Deng, Lei [2]; Wu, Yujie [2]; Li, Guoqi [2]; Ding, Yufei [1]; Li, Peng [4]; Xie, Yuan [4]
Journal: IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS
Publication Date: 2021-09-01
Pages: 15
Keywords: Spatiotemporal phenomena; Computational modeling; Perturbation methods; Biological neural networks; Backpropagation; Unsupervised learning; Training; Adversarial attack; backpropagation through time (BPTT); neuromorphic computing; spike-compatible gradient; spiking neural networks (SNNs)
ISSN: 2162-237X
DOI: 10.1109/TNNLS.2021.3106961
Abstract: Spiking neural networks (SNNs) are broadly deployed in neuromorphic devices to emulate brain function. In this context, SNN security becomes important yet lacks in-depth investigation. To this end, we target adversarial attacks against SNNs and identify several challenges distinct from attacking artificial neural networks (ANNs): 1) current adversarial attacks are mainly based on gradient information, which in SNNs presents in a spatiotemporal pattern that is hard to obtain with conventional backpropagation algorithms; 2) the continuous gradient of the input is incompatible with the binary spiking input during gradient accumulation, hindering the generation of spike-based adversarial examples; and 3) the input gradient can sometimes be all zeros (i.e., vanishing) due to the zero-dominant derivative of the firing function. Recently, backpropagation through time (BPTT)-inspired learning algorithms have been widely introduced into SNNs to improve performance, which brings the possibility of attacking the models accurately given spatiotemporal gradient maps. We propose two approaches to address the above challenges of gradient-input incompatibility and gradient vanishing. Specifically, we design a gradient-to-spike (G2S) converter to convert continuous gradients into ternary ones compatible with spike inputs. We then design a restricted spike flipper (RSF) to construct ternary gradients that can randomly flip the spike inputs with a controllable turnover rate when all-zero gradients are met. Putting these methods together, we build an adversarial attack methodology for SNNs. Moreover, we analyze the influence of the training loss function and the firing threshold of the penultimate layer on attack effectiveness. Extensive experiments are conducted to validate our solution. Besides the quantitative analysis of the influencing factors, we also compare SNNs and ANNs against adversarial attacks under different attack methods.
This work can help reveal what happens in SNN attacks and might stimulate more research on the security of SNN models and neuromorphic devices.
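The G2S and RSF mechanisms described in the abstract can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' implementation: the function names, tensor shapes, sign-based ternarization, and clipping rule are all assumptions; the only properties taken from the abstract are that perturbations are ternary, that the perturbed input must stay binary, and that RSF flips a controllable fraction of spikes when the gradient vanishes.

```python
import numpy as np

def g2s_converter(grad, spikes):
    # Hypothetical G2S sketch: turn continuous input gradients into a
    # ternary perturbation in {-1, 0, +1}. The gradient sign gives the
    # attack direction; a flip is kept only if it leaves the perturbed
    # input inside the binary spike domain {0, 1}.
    ternary = np.sign(grad).astype(int)
    perturbed = spikes + ternary
    ternary[(perturbed < 0) | (perturbed > 1)] = 0  # drop incompatible flips
    return ternary

def restricted_spike_flipper(spikes, turnover_rate, rng):
    # Hypothetical RSF sketch: when the gradient is all zeros (vanishing),
    # build a ternary perturbation that flips a random fraction
    # (turnover_rate) of spike bits: 1 -> 0 and 0 -> 1.
    mask = rng.random(spikes.shape) < turnover_rate
    flip = np.where(spikes == 1, -1, 1)
    return np.where(mask, flip, 0)

rng = np.random.default_rng(0)
spikes = rng.integers(0, 2, size=(4, 8))  # binary spike input (time x neurons)
grad = rng.normal(size=spikes.shape)      # spatiotemporal input gradient

if np.all(grad == 0):
    delta = restricted_spike_flipper(spikes, turnover_rate=0.1, rng=rng)
else:
    delta = g2s_converter(grad, spikes)

adv = spikes + delta
assert set(np.unique(adv)) <= {0, 1}      # adversarial example stays binary
```

In an actual attack, `grad` would come from a BPTT-style backward pass through the SNN, and the ternary perturbation would be accumulated onto the spike train each iteration; the sketch only shows why ternarization keeps the adversarial example spike-compatible.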
WOS Research Areas: Computer Science; Engineering
Language: English
WOS Accession Number: WOS:000733549300001
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Source URL: [http://119.78.100.204/handle/2XEOYT63/18012]
Collection: Institute of Computing Technology, CAS - Journal Articles (English)
Corresponding Author: Deng, Lei
Author Affiliations:
1. Univ Calif Santa Barbara, Dept Comp Sci, Santa Barbara, CA 93106 USA
2. Tsinghua Univ, Ctr Brain Inspired Comp Res, Dept Precis Instrument, Beijing 100084, Peoples R China
3. Chinese Acad Sci, Inst Comp Technol, State Key Lab Comp Architecture, Beijing 100190, Peoples R China
4. Univ Calif Santa Barbara, Dept Elect & Comp Engn, Santa Barbara, CA 93106 USA
Recommended Citation Formats
GB/T 7714
Liang, Ling, Hu, Xing, Deng, Lei, et al. Exploring Adversarial Attack in Spiking Neural Networks With Spike-Compatible Gradient[J]. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2021: 15.
APA: Liang, Ling, Hu, Xing, Deng, Lei, Wu, Yujie, Li, Guoqi, ... & Xie, Yuan. (2021). Exploring Adversarial Attack in Spiking Neural Networks With Spike-Compatible Gradient. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 15.
MLA: Liang, Ling, et al. "Exploring Adversarial Attack in Spiking Neural Networks With Spike-Compatible Gradient." IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS (2021): 15.

Deposit Method: OAI harvesting

Source: Institute of Computing Technology


Unless otherwise specified, all content in this system is protected by copyright, and all rights are reserved.