Chinese Academy of Sciences Institutional Repositories Grid
Spike-Triggered Non-Autoregressive Transformer for End-to-End Speech Recognition

Document Type: Conference Paper

Authors: Zhengkun Tian 1,3; Jiangyan Yi 1,3; Jianhua Tao 1,2,3; Ye Bai 1,3; Shuai Zhang 1,3; Zhengqi Wen 1,3
Publication Date: 2020-10
Conference Dates: October 25–29, 2020
Conference Location: Shanghai, China
Abstract

Non-autoregressive transformer models have achieved extremely fast inference speeds and performance comparable to autoregressive sequence-to-sequence models in neural machine translation. Most non-autoregressive transformers decode the target sequence from a mask sequence of predefined length. If the predefined length is too long, it causes a large amount of redundant computation; if it is shorter than the target sequence, it hurts the model's performance. To address this problem and improve inference speed, we propose a spike-triggered non-autoregressive transformer model for end-to-end speech recognition, which introduces a CTC module to predict the length of the target sequence and to accelerate convergence. All experiments are conducted on the public Chinese Mandarin dataset AISHELL-1. The results show that the proposed model can accurately predict the length of the target sequence and achieves performance competitive with advanced transformers. Moreover, the model achieves a real-time factor of 0.0056, exceeding all mainstream speech recognition models.
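
The sketch below (Python/PyTorch, not the authors' code) illustrates one plausible reading of the spike-triggered length prediction described in the abstract: the CTC head emits a per-frame distribution over the vocabulary, and each frame that "spikes" on a non-blank symbol is counted as one output token, so the non-autoregressive decoder can be given a mask of the predicted length instead of a fixed predefined one. The function name, BLANK_ID, and the argmax-based spike criterion are illustrative assumptions.

# Minimal sketch of CTC-spike-based target-length prediction.
# Assumptions (not from the paper): blank symbol index 0, and a spike is
# approximated as any frame whose argmax token is non-blank.
import torch

BLANK_ID = 0  # assumed index of the CTC blank symbol

def predict_target_length(ctc_log_probs: torch.Tensor) -> torch.Tensor:
    """Count CTC spikes per utterance.

    ctc_log_probs: (batch, frames, vocab) log-probabilities from the CTC head.
    Returns a (batch,) tensor with the predicted number of output tokens.
    """
    frame_ids = ctc_log_probs.argmax(dim=-1)   # (batch, frames) best token per frame
    is_spike = frame_ids.ne(BLANK_ID)          # frames that fire on a non-blank token
    return is_spike.sum(dim=-1)                # number of spikes = predicted length

if __name__ == "__main__":
    # Example: 1 utterance, 6 encoder frames, 4-symbol vocabulary (blank + 3 tokens).
    logits = torch.randn(1, 6, 4)
    lengths = predict_target_length(torch.log_softmax(logits, dim=-1))
    print(lengths)  # e.g. tensor([2]) -> the decoder is given a 2-slot mask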

Language: English
Source URL: http://ir.ia.ac.cn/handle/173211/48607
Collection: National Laboratory of Pattern Recognition, Intelligent Interaction
Corresponding Author: Jianhua Tao
Author Affiliations: 1. NLPR, Institute of Automation, Chinese Academy of Sciences
2. CAS Center for Excellence in Brain Science and Intelligence Technology
3. School of Artificial Intelligence, University of Chinese Academy of Sciences
Recommended Citation
GB/T 7714
Zhengkun Tian, Jiangyan Yi, Jianhua Tao, et al. Spike-Triggered Non-Autoregressive Transformer for End-to-End Speech Recognition[C]. Shanghai, China, October 25–29, 2020.

Deposit Method: OAI Harvesting

Source: Institute of Automation
