An End-to-end Structure with CTC Encoder and OCD Decoder For Speech Recognition
Document Type: Conference Paper
Author | Cheng, Yi 1,2 |
Publication Date | 2019-09 |
Conference Date | 2019-09 |
Conference Location | Graz, Austria |
Keywords | end-to-end, streaming ASR, encoder-decoder, OCD, CTC |
Abstract | Real-time streaming speech recognition is required by most applications for a smooth interactive experience. To support online recognition naturally, a common strategy in recently proposed end-to-end models is to add a blank label to the label set and output alignments instead of label sequences. However, generating the alignment means decoding a sequence much longer than the linguistic sequence. Moreover, several blank labels may appear between two output units in the alignment, which hinders the model from learning the dependency between adjacent units in the target sequence. In this work, we propose an innovative encoder-decoder structure, called ECTC-DOCD, for online speech recognition, which directly predicts the linguistic sequence without blank labels. In addition to the encoder and decoder, ECTC-DOCD contains a shrinking layer that drops redundant acoustic information; this layer serves as a bridge between the acoustic representation and linguistic modelling parts. Experiments confirm that ECTC-DOCD outperforms a strong CTC model on online ASR tasks. We also show that ECTC-DOCD achieves promising results on both Mandarin and English ASR datasets with first- and second-pass decoding. |
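The abstract does not spell out how the shrinking layer selects frames, so the following is only a minimal sketch of one common realization (dropping frames whose frame-level CTC argmax is the blank label); the function name `shrink_by_ctc_blank` and the `blank_id` convention are illustrative assumptions, not the paper's definition.

```python
import torch

def shrink_by_ctc_blank(encoder_out, ctc_log_probs, blank_id=0):
    """Drop frames whose CTC argmax is blank, keeping only 'spike' frames.

    encoder_out:   (T, D) frame-level acoustic representations
    ctc_log_probs: (T, V) per-frame CTC label log-posteriors
    Returns a shrunken (T', D) sequence, roughly linguistic-length,
    that a decoder could attend over without blank positions.
    """
    keep = ctc_log_probs.argmax(dim=-1) != blank_id  # boolean mask over frames
    return encoder_out[keep]

# Toy usage: 8 encoder frames, 5-symbol vocabulary (index 0 assumed to be blank)
enc = torch.randn(8, 16)
logp = torch.randn(8, 5).log_softmax(dim=-1)
shrunk = shrink_by_ctc_blank(enc, logp)
print(enc.shape, "->", shrunk.shape)
```

Under this reading, the decoder then operates on a sequence whose length is close to the target length, which is what allows the model to learn adjacent-unit dependencies directly.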
Source URL | http://ir.ia.ac.cn/handle/173211/44878 |
Collection | Research Center for Digital Content Technology and Services, Auditory Model and Cognitive Computing Group |
Corresponding Author | Cheng, Yi |
Affiliations | 1. University of Chinese Academy of Sciences, China; 2. Institute of Automation, Chinese Academy of Sciences, China |
Recommended Citation (GB/T 7714) | Cheng, Yi, Feng, Wang, Bo, Xu. An End-to-end Structure with CTC Encoder and OCD Decoder For Speech Recognition[C]. Graz, Austria, 2019-09. |
Deposit Method: OAI Harvesting
Source: Institute of Automation