Compressing Speaker Extraction Model with Ultra-low Precision Quantization and Knowledge Distillation
Document Type: Journal Article
Authors | Yating Huang3,4; Yunzhe Hao; Jiaming Xu; Bo Xu |
Journal | Neural Networks |
Publication Date | 2022-06 |
Volume | 154 |
Pages | 13-21 |
Abstract | Recently, our proposed speaker extraction model, WASE (learning When to Attend for Speaker Extraction), yielded superior performance over prior state-of-the-art methods by explicitly modeling the onset clue and treating it as important guidance in speaker extraction tasks. However, deployment on resource-constrained devices remains challenging: the model must be tiny and fast enough to perform inference with a minimal CPU and memory budget while preserving speaker extraction performance. In this work, we utilize model compression techniques to alleviate this problem and propose a lightweight speaker extraction model, TinyWASE, which aims to run on resource-constrained devices. Specifically, we mainly investigate the grouping effects of quantization-aware training and knowledge distillation techniques in the speaker extraction task and propose Distillation-aware Quantization. Experiments on the WSJ0-2mix dataset show that our proposed model can achieve performance comparable to the full-precision model while reducing the model size using ultra-low bits (e.g., 3 bits), obtaining an 8.97x compression ratio and a 2.15 MB model size. We further show that TinyWASE can be combined with other model compression techniques, such as parameter sharing, to achieve a compression ratio as high as 23.81x with limited performance degradation. Our code is available at https://github.com/aispeech-lab/TinyWASE. |
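To make the two techniques named in the abstract concrete, below is a minimal PyTorch sketch of ultra-low-bit quantization-aware training combined with a distillation loss from a full-precision teacher. It assumes a straight-through-estimator fake-quantizer and a simple combined loss; the names `FakeQuantSTE`, `QuantLinear`, and `distillation_aware_quantization_loss`, as well as the use of MSE for both terms, are illustrative assumptions and are not taken from the paper or its released code (which uses separation-specific objectives such as SI-SNR).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FakeQuantSTE(torch.autograd.Function):
    """Uniform fake-quantization of weights with a straight-through estimator gradient."""

    @staticmethod
    def forward(ctx, w, num_bits=3):
        qmax = 2 ** (num_bits - 1) - 1               # e.g. 3 for signed 3-bit weights
        scale = w.abs().max().clamp(min=1e-8) / qmax  # per-tensor scale
        w_q = torch.round(w / scale).clamp(-qmax - 1, qmax) * scale
        return w_q

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through estimator: gradients pass through the rounding unchanged.
        return grad_output, None


class QuantLinear(nn.Linear):
    """Linear layer whose weights are fake-quantized to `num_bits` on every forward pass."""

    def __init__(self, in_features, out_features, num_bits=3, bias=True):
        super().__init__(in_features, out_features, bias=bias)
        self.num_bits = num_bits

    def forward(self, x):
        w_q = FakeQuantSTE.apply(self.weight, self.num_bits)
        return F.linear(x, w_q, self.bias)


def distillation_aware_quantization_loss(student_out, teacher_out, target, alpha=0.5):
    """Blend the task loss with a distillation term toward the full-precision teacher.

    Both terms are plain MSE here purely for illustration; in the speaker extraction
    setting they would be replaced by the actual separation objective.
    """
    task_loss = F.mse_loss(student_out, target)
    distill_loss = F.mse_loss(student_out, teacher_out.detach())
    return (1 - alpha) * task_loss + alpha * distill_loss
```

In such a setup the quantized student and a frozen full-precision teacher process the same input mixture during training; at export time the fake-quantized weights can be stored as low-bit integers plus per-tensor scales, which is the general mechanism behind the large size reduction reported in the abstract (2.15 MB at 3 bits versus a full-precision model roughly 8.97 times larger).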
Source URL | http://ir.ia.ac.cn/handle/173211/49724 |
Collection | Research Center for Digital Content Technology and Services — Auditory Models and Cognitive Computing |
Corresponding Author | Jiaming Xu |
Author Affiliations | 1. Center for Excellence in Brain Science and Intelligence Technology, CAS, Shanghai, China; 2. School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China; 3. School of Future Technology, University of Chinese Academy of Sciences, Beijing, China; 4. Institute of Automation, Chinese Academy of Sciences (CAS), Beijing, China |
Recommended Citation (GB/T 7714) | Yating Huang, Yunzhe Hao, Jiaming Xu, et al. Compressing Speaker Extraction Model with Ultra-low Precision Quantization and Knowledge Distillation[J]. Neural Networks, 2022, 154: 13-21. |
APA | Yating Huang, Yunzhe Hao, Jiaming Xu, & Bo Xu. (2022). Compressing Speaker Extraction Model with Ultra-low Precision Quantization and Knowledge Distillation. Neural Networks, 154, 13-21. |
MLA | Yating Huang, et al. "Compressing Speaker Extraction Model with Ultra-low Precision Quantization and Knowledge Distillation." Neural Networks 154 (2022): 13-21. |
Deposit Method: OAI harvesting
Source: Institute of Automation