Speech-Transformer: A No-Recurrence Sequence-to-Sequence Model for Speech Recognition
Document Type: Conference Paper
Authors | Dong, Linhao (1,2); Xu, Shuang; Xu, Bo |
Publication Date | 2018-04 |
Conference Date | 2018-04 |
Conference Location | Calgary, Canada |
Keywords | speech recognition; sequence-to-sequence; attention; transformer |
Pages | 5884-5888 |
Abstract | Recurrent sequence-to-sequence models using the encoder-decoder architecture have made great progress on speech recognition tasks. However, they suffer from slow training because the internal recurrence limits training parallelization. In this paper, we present the Speech-Transformer, a no-recurrence sequence-to-sequence model that relies entirely on attention mechanisms to learn positional dependencies and can therefore be trained faster and more efficiently. We also propose a 2D-Attention mechanism, which jointly attends to the time and frequency axes of the 2-dimensional speech inputs, thus providing more expressive representations for the Speech-Transformer. Evaluated on the Wall Street Journal (WSJ) speech recognition dataset, our best model achieves a competitive word error rate (WER) of 10.9%, while the whole training process takes only 1.2 days on 1 GPU, significantly faster than the published results of recurrent sequence-to-sequence models. |
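To illustrate the idea of attending jointly to the time and frequency axes of a spectrogram, the following is a minimal NumPy sketch, not the authors' implementation: it uses a single head and identity projections for queries, keys, and values (the function names `scaled_dot_attention` and `twod_attention` and the input sizes are hypothetical), whereas the actual model uses learned projections and multiple attention heads.

```python
# Minimal sketch of attention applied along both axes of a 2-D spectrogram,
# illustrating the concept behind the 2D-Attention described in the abstract.
# Single-head, identity-projection simplification; names and sizes are illustrative.
import numpy as np

def scaled_dot_attention(q, k, v):
    """Standard scaled dot-product attention; inputs have shape (batch, length, dim)."""
    d = q.shape[-1]
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d)            # (batch, L, L)
    weights = np.exp(scores - scores.max(-1, keepdims=True))
    weights /= weights.sum(-1, keepdims=True)                  # softmax over keys
    return weights @ v                                          # (batch, L, dim)

def twod_attention(spec):
    """Attend along time (rows) and frequency (columns) of a (T, F) input,
    then concatenate the two views along the feature axis."""
    time_view = spec[None, :, :]        # 1 x T x F: each time frame acts as a query
    freq_view = spec.T[None, :, :]      # 1 x F x T: each frequency bin acts as a query
    out_time = scaled_dot_attention(time_view, time_view, time_view)[0]    # (T, F)
    out_freq = scaled_dot_attention(freq_view, freq_view, freq_view)[0].T  # (T, F)
    return np.concatenate([out_time, out_freq], axis=-1)        # (T, 2F)

if __name__ == "__main__":
    spectrogram = np.random.randn(100, 80)    # e.g. 100 frames x 80 filterbank bins
    print(twod_attention(spectrogram).shape)  # (100, 160)
```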
Institution Ranking | 1 |
Proceedings Publisher | IEEE Xplore |
Language | English |
Funding Project | Beijing Science and Technology Program [Z171100002217015] |
Source URL | http://ir.ia.ac.cn/handle/173211/39274 |
Collection | Research Center for Digital Content Technology and Services_Auditory Model and Cognitive Computing |
Author Affiliations | 1. University of Chinese Academy of Sciences, China; 2. Institute of Automation, Chinese Academy of Sciences, China |
Recommended Citation (GB/T 7714) | Dong, Linhao, Xu, Shuang, Xu, Bo. Speech-Transformer: A No-Recurrence Sequence-to-Sequence Model for Speech Recognition[C]. Calgary, Canada, 2018-04: 5884-5888. |
Deposit Method: OAI harvesting
Source: Institute of Automation