Chinese Academy of Sciences Institutional Repositories Grid
Learning to Explore Distillability and Sparsability: A Joint Framework for Model Compression

Document Type: Journal Article

Authors: Yufan Liu 1,5; Jiajiong Cao 3; Bing Li 2,5; Weiming Hu 1,5,6; Stephen Maybank 4
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence (T-PAMI)
Publication Date: 2022-06
Volume: 45  Issue: 3  Pages: 3378-3395
Abstract

Deep learning achieves excellent performance, usually at the expense of heavy computation. Recently, model compression has become a popular way of reducing this computation. Compression can be achieved using knowledge distillation or filter pruning. Knowledge distillation improves the accuracy of a lightweight network, while filter pruning removes redundant architecture from a cumbersome network. They are two different ways of achieving model compression, but few methods consider both simultaneously. In this paper, we revisit model compression and define two attributes of a model: distillability and sparsability, which reflect how much useful knowledge can be distilled and how high a pruning ratio can be achieved, respectively. Guided by our observations and considering both accuracy and model size, a dynamic distillability-and-sparsability learning framework (DDSL) is introduced for model compression. DDSL consists of a teacher, a student, and a dean. Knowledge is distilled from the teacher to guide the student. The dean controls the training process by dynamically adjusting the distillation supervision and the sparsity supervision in a meta-learning framework. An alternating direction method of multipliers (ADMM)-based knowledge distillation-with-pruning (KDP) joint optimization algorithm is proposed to train the model. Extensive experimental results show that DDSL outperforms 24 state-of-the-art methods, including both knowledge distillation and filter pruning methods.
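The abstract describes two supervision signals acting on the student: a distillation loss from the teacher and a sparsity penalty that makes filters prunable, with the dean dynamically balancing the two. The snippet below is a minimal, hypothetical sketch of such a joint objective in PyTorch; the function names, the group-lasso filter penalty, and the static weights alpha and beta are illustrative assumptions only and do not reproduce the authors' DDSL, the dean's meta-learned weighting, or the ADMM-based KDP algorithm.

# Hypothetical sketch: joint distillation + filter-sparsity objective.
# NOT the authors' DDSL implementation; alpha/beta stand in for the
# dynamically adjusted supervision weights described in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=4.0):
    # Soften both distributions with temperature T and match them via KL.
    p_t = F.softmax(teacher_logits / T, dim=1)
    log_p_s = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_s, p_t, reduction="batchmean") * (T * T)

def filter_sparsity(model):
    # Group-lasso style penalty: sum of per-filter L2 norms of conv weights,
    # which pushes whole filters toward zero so they can later be pruned.
    penalty = 0.0
    for m in model.modules():
        if isinstance(m, nn.Conv2d):
            penalty = penalty + m.weight.flatten(1).norm(dim=1).sum()
    return penalty

def joint_loss(student, student_logits, teacher_logits, labels,
               alpha=0.5, beta=1e-4):
    # Cross-entropy on labels + distillation term + sparsity term.
    ce = F.cross_entropy(student_logits, labels)
    kd = distillation_loss(student_logits, teacher_logits)
    sp = filter_sparsity(student)
    return ce + alpha * kd + beta * sp

In the paper's framework these weights are not fixed hyperparameters; the dean adjusts the distillation and sparsity supervision during training, and the joint problem is solved with the ADMM-based KDP algorithm.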

Source URL: http://ir.ia.ac.cn/handle/173211/51487
Collection: Institute of Automation, National Laboratory of Pattern Recognition, Video Content Security Team
Author Affiliations:
1. School of Artificial Intelligence, University of Chinese Academy of Sciences
2. People AI, Inc.
3. Ant Group
4. Department of Computer Science and Information Systems, Birkbeck College, University of London
5. Institute of Automation, Chinese Academy of Sciences
6. CAS Center for Excellence in Brain Science and Intelligence Technology
Recommended Citation
GB/T 7714
Yufan Liu, Jiajiong Cao, Bing Li, et al. Learning to Explore Distillability and Sparsability: A Joint Framework for Model Compression[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence (T-PAMI), 2022, 45(3): 3378-3395.
APA: Yufan Liu, Jiajiong Cao, Bing Li, Weiming Hu, & Stephen Maybank. (2022). Learning to Explore Distillability and Sparsability: A Joint Framework for Model Compression. IEEE Transactions on Pattern Analysis and Machine Intelligence (T-PAMI), 45(3), 3378-3395.
MLA: Yufan Liu, et al. "Learning to Explore Distillability and Sparsability: A Joint Framework for Model Compression". IEEE Transactions on Pattern Analysis and Machine Intelligence (T-PAMI) 45.3 (2022): 3378-3395.

Deposit Method: OAI harvesting

Source: Institute of Automation

Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.