Chinese Academy of Sciences Institutional Repositories Grid
TWO-STAGE PRE-TRAINING FOR SEQUENCE TO SEQUENCE SPEECH RECOGNITION

Document Type: Conference Paper

Authors: Fan ZY (范志赟)1,2; Zhou SY (周世玉)2; Xu B (徐波)2
Publication Date: 2021-09
Conference Date: 2021-07-18
Conference Venue: Online (virtual conference)
Keywords: pre-training; speech recognition; encoder-decoder; sequence-to-sequence
Abstract

The attention-based encoder-decoder structure is popular in automatic speech recognition (ASR). However, it relies heavily on transcribed data. In this paper, we propose a novel pre-training strategy for the encoder-decoder sequence-to-sequence (seq2seq) model by utilizing unpaired speech and transcripts. The pre-training process consists of two stages: acoustic pre-training and linguistic pre-training. In the acoustic pre-training stage, we use a large amount of speech to pre-train the encoder by predicting masked speech feature chunks from their contexts. In the linguistic pre-training stage, we first generate synthesized speech from a large number of transcripts using a text-to-speech (TTS) system and then use the synthesized paired data to pre-train the decoder. The two-stage pre-training is conducted on the AISHELL-2 dataset, and we apply the pre-trained model to multiple subsets of AISHELL-1 and HKUST for post-training. As the size of the subset increases, we obtain relative character error rate reductions (CERR) ranging from 38.24% down to 7.88% on AISHELL-1 and from 12.00% down to 1.20% on HKUST.
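The acoustic stage described in the abstract, predicting masked speech-feature chunks from their surrounding context, resembles masked-reconstruction pre-training. The sketch below (PyTorch) is a minimal illustration only: the class name, the Transformer encoder, the chunked random-masking policy, and the L1 reconstruction loss are all assumptions made for exposition, not the paper's published implementation.

```python
import torch
import torch.nn as nn

class MaskedChunkEncoderPretrainer(nn.Module):
    """Hypothetical sketch: pre-train a speech encoder by reconstructing
    masked feature chunks from their context (assumed architecture/loss)."""

    def __init__(self, feat_dim=80, d_model=256, n_heads=4, n_layers=4):
        super().__init__()
        self.input_proj = nn.Linear(feat_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.output_proj = nn.Linear(d_model, feat_dim)  # reconstruct features

    def forward(self, feats, chunk_size=8, mask_ratio=0.15):
        # feats: (batch, time, feat_dim), e.g. log-mel filterbank frames
        B, T, _ = feats.shape
        n_chunks = T // chunk_size
        # Randomly select whole chunks of frames to mask (assumed policy).
        chunk_mask = torch.rand(B, n_chunks, device=feats.device) < mask_ratio
        frame_mask = chunk_mask.repeat_interleave(chunk_size, dim=1)
        if frame_mask.shape[1] < T:  # leave any trailing partial chunk unmasked
            pad = torch.zeros(B, T - frame_mask.shape[1],
                              dtype=torch.bool, device=feats.device)
            frame_mask = torch.cat([frame_mask, pad], dim=1)
        # Zero out masked frames, encode, and reconstruct all positions.
        masked = feats.masked_fill(frame_mask.unsqueeze(-1), 0.0)
        recon = self.output_proj(self.encoder(self.input_proj(masked)))
        # Reconstruction loss on masked positions only.
        if not frame_mask.any():
            return feats.new_zeros(())
        return nn.functional.l1_loss(recon[frame_mask], feats[frame_mask])

# Usage sketch: one unsupervised step on a batch of 4 utterances.
model = MaskedChunkEncoderPretrainer()
loss = model(torch.randn(4, 200, 80))
loss.backward()
```

The linguistic stage, as the abstract describes it, needs no special objective: a TTS system converts unpaired transcripts into synthesized speech, and the resulting (speech, transcript) pairs drive ordinary supervised seq2seq training to pre-train the decoder before post-training on real paired data.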

Source URL: http://ir.ia.ac.cn/handle/173211/49729
Collection: Research Center for Digital Content Technology and Services_Auditory Models and Cognitive Computing
Author Affiliations:
1. School of Artificial Intelligence, University of Chinese Academy of Sciences, China
2. Institute of Automation, Chinese Academy of Sciences, China
Recommended Citation
GB/T 7714
Fan ZY, Zhou SY, Xu B. TWO-STAGE PRE-TRAINING FOR SEQUENCE TO SEQUENCE SPEECH RECOGNITION[C]. Online conference, 2021-07-18.

Deposit Method: OAI Harvesting

Source: Institute of Automation

Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.