Chinese Academy of Sciences Institutional Repositories Grid
SPEAKER-AWARE SPEECH-TRANSFORMER

Document type: Conference paper

Authors: Fan ZY (范志赟)2,3; Li J (李杰)1; Zhou SY (周世玉)3; Xu B (徐波)3
Publication date: 2019-12
Conference date: 2019-12-14
Conference venue: Singapore
Keywords: Speech-Transformer, speaker adaptation, end-to-end speech recognition, speaker aware training, i-vector
Abstract (English)

Recently, end-to-end (E2E) models have become a competitive alternative to conventional hybrid automatic speech recognition (ASR) systems. However, they still suffer from speaker mismatch between training and testing conditions. In this paper, we use the Speech-Transformer (ST) as the study platform to investigate speaker-aware training of E2E models. We propose a model called the Speaker-Aware Speech-Transformer (SAST), which is a standard ST equipped with a speaker attention module (SAM). The SAM has a static speaker knowledge block (SKB) made up of i-vectors. At each time step, the encoder output attends to the i-vectors in the block and generates a weighted speaker embedding vector, which helps the model normalize speaker variations. The SAST model trained in this way becomes independent of specific training speakers and thus generalizes better to unseen testing speakers. We investigate different factors of the SAM. Experimental results on the AISHELL-1 task show that SAST achieves a relative 6.5% CER reduction (CERR) over the speaker-independent (SI) baseline. Moreover, we demonstrate that SAST still works quite well even if the i-vectors in the SKB all come from a data source other than the acoustic training set.
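The speaker attention module described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact design: the dot-product scoring function, vector dimensions, and the use of plain Python lists are all assumptions made for readability.

```python
import math

def speaker_attention(encoder_frame, ivectors):
    """Sketch of a SAM step: one encoder output frame attends over a
    static block of speaker i-vectors (the SKB) and returns their
    softmax-weighted combination as a soft speaker embedding."""
    # Attention scores: dot product between the frame and each i-vector
    # (illustrative scoring; the paper may use a learned projection).
    scores = [sum(f * v for f, v in zip(encoder_frame, ivec))
              for ivec in ivectors]
    # Softmax over the speaker knowledge block.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    # Weighted combination of i-vectors -> speaker embedding
    # for this time step.
    dim = len(ivectors[0])
    return [sum(w * ivec[d] for w, ivec in zip(weights, ivectors))
            for d in range(dim)]
```

Because the SKB is static, the model never depends on the identity of any single training speaker; it only learns to mix the block's i-vectors, which is what allows generalization to unseen test speakers.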

Source URL: [http://ir.ia.ac.cn/handle/173211/49728]
Collection: Research Center for Digital Content Technology and Services _ Auditory Model and Cognitive Computing
Author affiliations: 1. Kwai, Beijing, P.R. China
2. University of Chinese Academy of Sciences, China
3. Institute of Automation, Chinese Academy of Sciences, China
Recommended citation
GB/T 7714
Fan ZY, Li J, Zhou SY, et al. SPEAKER-AWARE SPEECH-TRANSFORMER[C]. In: . Singapore. 2019-12-14.

Deposit method: OAI harvesting

Source: Institute of Automation


Unless otherwise noted, all content in this system is protected by copyright, with all rights reserved.