Chinese Academy of Sciences Institutional Repositories Grid
Mixspeech: Data augmentation for low-resource automatic speech recognition

Document Type: Conference Paper

Authors: Meng Linghui 1,2; Xu Jin; Tan Xu; Wang Jindong; Qin Tao; Xu Bo 1,2
Publication Date: 2021-06
Conference Date: 2021.6.6-2021.6.11
Conference Location: Toronto, Canada
English Abstract

In this paper, we propose MixSpeech, a simple yet effective data augmentation method based on mixup for automatic speech recognition (ASR). MixSpeech trains an ASR model by taking a weighted combination of two different speech features (e.g., mel-spectrograms or MFCCs) as the input and recognizing both text sequences, where the two recognition losses use the same combination weight. We apply MixSpeech to two popular end-to-end speech recognition models, LAS (Listen, Attend and Spell) and Transformer, and conduct experiments on several low-resource datasets including TIMIT, WSJ, and HKUST. Experimental results show that MixSpeech achieves better accuracy than the baseline models without data augmentation, and outperforms SpecAugment, a strong data augmentation method, on these recognition tasks. Specifically, MixSpeech outperforms SpecAugment with a relative PER improvement of 10.6% on the TIMIT dataset, and achieves a strong WER of 4.7% on the WSJ dataset.
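The core idea described in the abstract can be sketched in a few lines: mix two input feature tensors with a weight sampled from a Beta distribution, then combine the recognition losses against both transcripts with that same weight. The following is a minimal illustrative sketch, not the authors' implementation; the function names, the Beta(alpha, alpha) sampling, and the abstract `loss_a`/`loss_b` placeholders are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def mixspeech_features(feat_a, feat_b, alpha=0.5, rng=rng):
    """Mix two speech feature tensors (e.g., mel-spectrograms of equal
    shape) with a mixup weight lam sampled from Beta(alpha, alpha)."""
    lam = float(rng.beta(alpha, alpha))
    mixed = lam * feat_a + (1.0 - lam) * feat_b
    return mixed, lam

def mixspeech_loss(loss_a, loss_b, lam):
    """Combine the two recognition losses (one per target transcript)
    using the same weight that mixed the input features."""
    return lam * loss_a + (1.0 - lam) * loss_b

# Usage: mix two (frames x mel-bins) feature matrices, then weight
# the per-transcript losses produced by the ASR model identically.
a = rng.normal(size=(100, 80))
b = rng.normal(size=(100, 80))
mixed, lam = mixspeech_features(a, b)
total = mixspeech_loss(2.0, 4.0, lam)
```

In practice the two utterances generally differ in length, so the shorter feature matrix must be padded (or the longer one truncated) before mixing; that detail is omitted here for brevity.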

Source URL: http://ir.ia.ac.cn/handle/173211/57334
Collection: Research Center for Digital Content Technology and Services_Auditory Models and Cognitive Computing
Author Affiliations:
1. School of Artificial Intelligence, University of Chinese Academy of Sciences
2. Institute of Automation, Chinese Academy of Sciences
Recommended Citation (GB/T 7714):
Meng Linghui, Xu Jin, Tan Xu, et al. Mixspeech: Data augmentation for low-resource automatic speech recognition[C]. Toronto, Canada, 2021.6.6-2021.6.11.

Deposit Method: OAI Harvesting

Source: Institute of Automation


Unless otherwise specified, all content in this system is protected by copyright, with all rights reserved.