Chinese Academy of Sciences Institutional Repositories Grid
Semi-supervised Cross-domain Visual Feature Learning for Audio-Visual Broadcast Speech Transcription

Document Type: Conference Paper

Authors: Su, Rongfeng; Liu, Xunying; Wang, Lan
Publication Date: 2018
Conference Date: 2018
Conference Venue: Hyderabad, India
Abstract: Visual information can be incorporated into automatic speech recognition (ASR) systems to improve their robustness in adverse acoustic conditions. Conventional audio-visual speech recognition (AVSR) systems require highly specialized audio-visual (AV) data for both system training and evaluation. For many real-world speech recognition applications, only audio information is available, which presents a major challenge to the wider adoption of AVSR systems. To address this challenge, this paper proposes a semi-supervised visual feature learning approach for developing AVSR systems on a DARPA GALE Mandarin broadcast transcription task. Audio-to-visual feature inversion long short-term memory networks (LSTMs) were first constructed using a limited amount of out-of-domain AV data. The domain mismatch between the acoustic features and the broadcast data was further reduced using multi-level domain-adaptive deep networks. Visual features were then generated automatically from the broadcast speech audio and used at both AVSR system training and testing time. Experimental results suggest that a CNN-based AVSR system using the proposed semi-supervised cross-domain audio-to-visual feature generation technique outperformed the baseline audio-only CNN ASR system by an average relative CER reduction of 6.8%. In particular, on the most difficult Phoenix TV subset, a CER reduction of 1.32% absolute (8.34% relative) was obtained.
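The core of the audio-to-visual inversion step described in the abstract is a recurrent network that regresses visual features from acoustic frames. Below is a minimal numpy sketch of such an inversion network: a single-layer LSTM followed by a linear output layer. All dimensions, parameter names, and the random initialization are illustrative assumptions, not the configuration used in the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_inversion_forward(acoustic, params):
    """Map a sequence of acoustic frames to visual feature frames using
    a single-layer LSTM plus a linear regression output layer.
    acoustic: (T, d_a) array of acoustic feature frames.
    Returns: (T, d_v) array of generated visual features."""
    W, U, b, Wo, bo = (params[k] for k in ("W", "U", "b", "Wo", "bo"))
    H = U.shape[1]                       # hidden size
    h = np.zeros(H)                      # hidden state
    c = np.zeros(H)                      # cell state
    outputs = []
    for x in acoustic:
        z = W @ x + U @ h + b            # all four gates stacked: (4H,)
        i = sigmoid(z[:H])               # input gate
        f = sigmoid(z[H:2 * H])          # forget gate
        o = sigmoid(z[2 * H:3 * H])      # output gate
        g = np.tanh(z[3 * H:])           # candidate cell update
        c = f * c + i * g
        h = o * np.tanh(c)
        outputs.append(Wo @ h + bo)      # linear visual-feature regression
    return np.stack(outputs)

# Illustrative dimensions (not from the paper): 40-dim acoustic input,
# 30-dim visual output, 64 hidden units, 100 frames.
rng = np.random.default_rng(0)
d_a, d_v, H, T = 40, 30, 64, 100
params = {
    "W": rng.normal(0.0, 0.1, (4 * H, d_a)),
    "U": rng.normal(0.0, 0.1, (4 * H, H)),
    "b": np.zeros(4 * H),
    "Wo": rng.normal(0.0, 0.1, (d_v, H)),
    "bo": np.zeros(d_v),
}
visual = lstm_inversion_forward(rng.normal(size=(T, d_a)), params)
```

In the paper's pipeline, a network of this kind is trained on limited out-of-domain AV data and then applied to broadcast audio, so the generated visual features can stand in for the missing video stream at both training and testing time.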
Source URL: http://ir.siat.ac.cn:8080/handle/172644/13712
Collection: Shenzhen Institutes of Advanced Technology_Institute of Integration
Recommended Citation (GB/T 7714):
Su, Rongfeng, Liu, Xunying, Wang, Lan. Semi-supervised Cross-domain Visual Feature Learning for Audio-Visual Broadcast Speech Transcription[C]. Hyderabad, India, 2018.

Deposit Method: OAI harvesting

Source: Shenzhen Institutes of Advanced Technology


Unless otherwise stated, all content in this system is protected by copyright, and all rights are reserved.