Complex Dynamic Neurons Improved Spiking Transformer Network for Efficient Automatic Speech Recognition
Document Type: Conference Paper
Authors | Qingyu Wang1,2
Publication Date | 2023-05
Conference Date | 2023-2-9
Conference Location | Washington D.C., USA
Abstract | The spiking neural network (SNN) with leaky integrate-and-fire (LIF) neurons has been widely used in automatic speech recognition (ASR) tasks. However, the LIF neuron is still relatively simple compared with neurons in the biological brain, so further research on more types of neurons with different scales of neuronal dynamics is needed. Here we introduce four types of neuronal dynamics to post-process the sequential patterns generated by the spiking transformer, yielding the complex dynamic neuron improved spiking transformer network (DyTr-SNN). We found that the DyTr-SNN handles a non-toy automatic speech recognition task well, achieving a lower phoneme error rate, lower computational cost, and higher robustness. These results indicate that further cooperation of SNNs and neural dynamics at the neuron and network scales may hold much promise for the future, especially for ASR tasks.
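For context on the baseline the abstract refers to, below is a minimal sketch of a discrete-time leaky integrate-and-fire (LIF) neuron, the simple dynamic that the DyTr-SNN extends with four richer types of neuronal dynamics. The sketch is illustrative only and is not taken from the paper; the parameter names (tau, v_threshold, v_reset) and the constant-current usage example are assumptions.

```python
import numpy as np

def lif_neuron(inputs, tau=2.0, v_threshold=1.0, v_reset=0.0):
    """Discrete-time leaky integrate-and-fire (LIF) neuron (illustrative sketch).

    inputs: 1-D array of input currents, one value per time step.
    Returns a binary spike train of the same length.
    """
    v = v_reset
    spikes = np.zeros_like(inputs)
    for t, x in enumerate(inputs):
        # Leaky integration: the membrane potential decays toward rest
        # while accumulating the input current.
        v = v + (x - (v - v_reset)) / tau
        if v >= v_threshold:   # fire when the threshold is crossed
            spikes[t] = 1.0
            v = v_reset        # hard reset after a spike
    return spikes

# Usage: a constant supra-threshold drive produces a regular spike train.
print(lif_neuron(np.full(10, 1.2)))
```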
Language | English
Source URL | http://ir.ia.ac.cn/handle/173211/52078
Collection | Research Center for Digital Content Technology and Services _ Auditory Models and Cognitive Computing
Corresponding Authors | Tielin Zhang; Bo Xu
Author Affiliations | 1. School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China; 2. Institute of Automation, Chinese Academy of Sciences, Beijing, China; 3. Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China; 4. School of Artificial Intelligence, Jilin University, Changchun, China
Recommended Citation (GB/T 7714) | Qingyu Wang, Tielin Zhang, Minglun Han, et al. Complex Dynamic Neurons Improved Spiking Transformer Network for Efficient Automatic Speech Recognition[C]. In: . Washington D.C., USA, 2023-2-9.
Deposit Method: OAI Harvesting
Source: Institute of Automation