Chinese Academy of Sciences Institutional Repositories Grid
Audio-Visual Speech Separation with Visual Features Enhanced by Adversarial Training

Document Type: Conference Paper

Authors: Zhang Peng1,4; Xu Jiaming1,4; Shi Jing1; Hao Yunzhe1; Qin Lei2; Xu Bo1,3,4
Publication Year: 2021
Conference Date: 2021-07-18
Conference Venue: Online (virtual conference)
Keywords: audio-visual speech separation; robust adversarial training method; time-domain approach
Abstract:

Audio-visual speech separation (AVSS) refers to separating an individual voice from an audio mixture of multiple simultaneous talkers by conditioning on visual features. For the AVSS task, visual features play an important role, and on this basis we manage to extract more effective visual features to improve performance. In this paper, we propose a novel AVSS model that uses speech-related visual features to isolate the target speaker. Specifically, the method of extracting speech-related visual features has two steps. First, we extract visual features that contain speech-related information by learning a joint audio-visual representation. Second, we use an adversarial training method to further enhance the speech-related information in the visual features. We adopt a time-domain approach and build audio-visual speech separation networks with temporal convolutional neural network blocks. Experiments on four audio-visual datasets, including GRID, TCD-TIMIT, AVSpeech, and LRS2, show that our model significantly outperforms previous state-of-the-art AVSS models. We also demonstrate that our model can achieve excellent speech separation performance in noisy real-world scenarios. Moreover, to alleviate the performance degradation of AVSS models caused by partially missing video frames, we propose a training strategy that makes our model robust when video frames are partially missing. The demo, code, and supplementary materials are available at https://github.com/aispeech-lab/advr-avss
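The abstract mentions enhancing speech-related information in the visual features through adversarial training. The sketch below is a minimal, hypothetical PyTorch illustration of one common way such adversarial feature training can be wired up: a gradient-reversal layer placed between the visual encoder and an auxiliary classifier that predicts a speech-unrelated attribute (speaker identity here). All class names, dimensions, and the choice of speaker identity as the adversarial target are illustrative assumptions, not taken from the paper or its repository.

import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reverses (and scales) gradients in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class VisualEncoder(nn.Module):
    """Toy visual encoder: maps per-frame lip embeddings to speech-related visual features."""
    def __init__(self, in_dim=512, feat_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, feat_dim), nn.ReLU(), nn.Linear(feat_dim, feat_dim))

    def forward(self, v):              # v: (batch, frames, in_dim)
        return self.net(v)             # (batch, frames, feat_dim)

class AdversarialHead(nn.Module):
    """Adversary that tries to recover a speech-unrelated attribute (speaker identity,
    an assumption here) from the visual features; the gradient-reversal layer pushes
    the encoder to suppress that information."""
    def __init__(self, feat_dim=256, num_speakers=100):
        super().__init__()
        self.clf = nn.Linear(feat_dim, num_speakers)

    def forward(self, feat, lambd=1.0):
        rev = GradReverse.apply(feat.mean(dim=1), lambd)   # pool over time, reverse gradients
        return self.clf(rev)                               # speaker logits

if __name__ == "__main__":
    enc, adv = VisualEncoder(), AdversarialHead()
    lips = torch.randn(4, 75, 512)        # 4 clips x 75 video frames x 512-dim lip embeddings
    spk = torch.randint(0, 100, (4,))     # speaker labels for the adversary
    feat = enc(lips)
    loss_adv = nn.functional.cross_entropy(adv(feat), spk)
    loss_adv.backward()                   # encoder gradients flow through the reversal layer
    print(feat.shape, loss_adv.item())

In practice this adversarial loss would be combined with the separation objective of the time-domain network; the sketch only shows the feature-level adversarial component.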

Source Publication Authors: INNS (International Neural Network Society); IEEE Computational Intelligence Society
Property Rights Rank: 1
Language: English
Source URL: http://ir.ia.ac.cn/handle/173211/44910
Collection: Research Center for Digital Content Technology and Services - Auditory Models and Cognitive Computing
Institute of Automation, Chinese Academy of Sciences
Corresponding Authors: Xu Jiaming; Xu Bo
Author Affiliations:
1. Institute of Automation, Chinese Academy of Sciences
2. Huawei Consumer Business Group
3. Center for Excellence in Brain Science and Intelligence Technology
4. School of Artificial Intelligence, University of Chinese Academy of Sciences
Recommended Citation (GB/T 7714):
Zhang Peng, Xu Jiaming, Shi Jing, et al. Audio-Visual Speech Separation with Visual Features Enhanced by Adversarial Training[C]. In: Online Conference. 2021-07-18.

Ingestion Method: OAI Harvesting

Source: Institute of Automation

