Gated Recurrent Fusion With Joint Training Framework for Robust End-to-End Speech Recognition
Document Type: Journal Article
Authors | Fan, Cunhang1,2; Yi, Jiangyan2; Tao, Jianhua1,2,3; Tian, Zhengkun1,2; Liu, Bin2; Wen, Zhengqi2 |
Journal | IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING |
Publication Date | 2021 |
Volume | 29 |
Pages | 198-209 |
ISSN | 2329-9290 |
Keywords | Speech enhancement; Speech recognition; Training; Noise measurement; Logic gates; Acoustic distortion; Task analysis; Gated recurrent fusion; robust end-to-end speech recognition; speech distortion; speech enhancement; speech transformer |
DOI | 10.1109/TASLP.2020.3039600 |
Abstract | Joint training frameworks for speech enhancement and recognition have achieved good performance in robust end-to-end automatic speech recognition (ASR). However, these methods use only the enhanced features as input to the speech recognition component, so they are affected by the speech distortion problem. To address this problem, this paper proposes a gated recurrent fusion (GRF) method with a joint training framework for robust end-to-end ASR. The GRF algorithm dynamically combines the noisy and enhanced features. Therefore, the GRF can not only remove noise signals from the enhanced features but also learn raw fine structures from the noisy features, which alleviates speech distortion. The proposed method consists of speech enhancement, GRF, and speech recognition. Firstly, a mask-based speech enhancement network is applied to enhance the input speech. Secondly, the GRF is applied to address the speech distortion problem. Thirdly, to improve ASR performance, the state-of-the-art speech transformer algorithm is used as the speech recognition component. Finally, the joint training framework is utilized to optimize these three components simultaneously. Experiments are conducted on AISHELL-1, an open-source Mandarin speech corpus. Experimental results show that the proposed method achieves a relative character error rate (CER) reduction of 10.04% over the conventional joint enhancement and transformer method that uses only the enhanced features. Especially at low signal-to-noise ratio (0 dB), the proposed method achieves a 12.67% CER reduction, which suggests its potential. |
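The gated fusion idea described in the abstract can be sketched as follows. This is an illustrative NumPy sketch with assumed feature shapes, a single feed-forward gate, and a random (untrained) gate projection; the paper's actual GRF is a learned recurrent module trained jointly with the enhancement and transformer components, and none of the variable names below come from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gated_fusion(x_noisy, x_enh, W, b):
    """Fuse noisy and enhanced feature frames with a per-dimension gate.

    g = sigmoid([x_noisy; x_enh] @ W + b)  -- gate values in (0, 1)
    fused = g * x_enh + (1 - g) * x_noisy  -- convex combination per element
    """
    z = np.concatenate([x_noisy, x_enh], axis=-1)
    g = sigmoid(z @ W + b)
    return g * x_enh + (1.0 - g) * x_noisy

# Toy example: 5 frames of 8-dimensional acoustic features.
rng = np.random.default_rng(0)
T, D = 5, 8
x_noisy = rng.standard_normal((T, D))   # features from the noisy input
x_enh = rng.standard_normal((T, D))     # features from the enhancement network
W = rng.standard_normal((2 * D, D)) * 0.1  # hypothetical gate projection
b = np.zeros(D)

fused = gated_fusion(x_noisy, x_enh, W, b)
print(fused.shape)  # (5, 8)
```

Because the gate forms a per-element convex combination, every fused value lies between the corresponding noisy and enhanced values: the gate can lean on the enhanced features where enhancement helped and fall back on the raw noisy features where enhancement introduced distortion.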
WOS Keywords | DEEP ; NOISE ; SEPARATION ; NETWORKS |
Funding Projects | National Key Research and Development Plan of China[2018YFB1005003] ; National Natural Science Foundation of China (NSFC)[61831022] ; National Natural Science Foundation of China (NSFC)[61901473] ; National Natural Science Foundation of China (NSFC)[61771472] ; National Natural Science Foundation of China (NSFC)[61773379] ; Inria-CAS Joint Research Project[173211KYSB20170061] ; Inria-CAS Joint Research Project[173211KYSB20190049] |
WOS Research Areas | Acoustics ; Engineering |
Language | English |
Publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC |
WOS Record Number | WOS:000597160600015 |
Funding Organizations | National Key Research and Development Plan of China ; National Natural Science Foundation of China (NSFC) ; Inria-CAS Joint Research Project |
Source URL | [http://ir.ia.ac.cn/handle/173211/42783] |
Collection | National Laboratory of Pattern Recognition_Intelligent Interaction |
Corresponding Authors | Yi, Jiangyan; Tao, Jianhua |
Author Affiliations | 1. Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing 100190, Peoples R China; 2. Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China; 3. CAS Ctr Excellence Brain Sci & Intelligence Techn, Beijing 100190, Peoples R China |
Recommended Citation (GB/T 7714) | Fan, Cunhang, Yi, Jiangyan, Tao, Jianhua, et al. Gated Recurrent Fusion With Joint Training Framework for Robust End-to-End Speech Recognition[J]. IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2021(29): 198-209. |
APA | Fan, Cunhang, Yi, Jiangyan, Tao, Jianhua, Tian, Zhengkun, Liu, Bin, & Wen, Zhengqi. (2021). Gated Recurrent Fusion With Joint Training Framework for Robust End-to-End Speech Recognition. IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING(29), 198-209. |
MLA | Fan, Cunhang, et al. "Gated Recurrent Fusion With Joint Training Framework for Robust End-to-End Speech Recognition". IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING 29 (2021): 198-209. |
Ingestion Method: OAI harvesting
Source: Institute of Automation