CLUSTER REGULARIZED QUANTIZATION FOR DEEP NETWORKS COMPRESSION
Document Type | Conference Paper |
Author | Hu YM (胡一鸣) |
Publication Date | 2019 |
Conference Date | 2019.9.23 |
Conference Location | Taipei International Convention Center, Taiwan |
Keywords | deep neural networks; object classification; model compression; quantization |
Abstract | Deep neural networks (DNNs) have achieved great success in a wide range of computer vision areas, but their application to mobile devices is limited by high storage and computational cost. Much effort has been devoted to compressing DNNs. In this paper, we propose a simple yet effective method for deep network compression, named Cluster Regularized Quantization (CRQ), which can reduce the representation precision of a full-precision model to ternary values without a significant accuracy drop. In particular, the proposed method aims to reduce the quantization error by introducing a cluster regularization term, imposed on the full-precision weights so that they naturally concentrate around the target values. By explicitly regularizing the weights during the re-training stage, the full-precision model can make a smooth transition to the low-bit one. Comprehensive experiments on benchmark datasets demonstrate the effectiveness of the proposed method. |
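The abstract does not give the exact form of the regularizer, so the following is a minimal PyTorch sketch of the idea as described: a penalty that pulls each full-precision weight toward the nearest of the ternary targets {-α, 0, +α}. The squared-distance penalty and the names `alpha`, `thresh`, and `lam` are illustrative assumptions, not the paper's formulation.

```python
import torch

def cluster_regularizer(w: torch.Tensor, alpha: float) -> torch.Tensor:
    """Assumed cluster penalty: squared distance from each weight to its
    nearest ternary target in {-alpha, 0, +alpha}."""
    targets = torch.tensor([-alpha, 0.0, alpha], device=w.device)
    # Pairwise squared distances between N weights and the 3 cluster centers.
    dists = (w.reshape(-1, 1) - targets.reshape(1, -1)) ** 2
    # Each weight is penalized only by its closest center, so minimizing
    # this term concentrates weights around the ternary values.
    return dists.min(dim=1).values.sum()

def ternarize(w: torch.Tensor, alpha: float, thresh: float) -> torch.Tensor:
    """Hard quantization to {-alpha, 0, +alpha} applied after re-training."""
    return torch.where(w.abs() > thresh, alpha * torch.sign(w), torch.zeros_like(w))

def total_loss(task_loss: torch.Tensor, model: torch.nn.Module,
               alpha: float = 0.05, lam: float = 1e-4) -> torch.Tensor:
    """Re-training objective (assumed usage): task loss plus the weighted
    cluster regularizer over all parameters."""
    reg = sum(cluster_regularizer(p, alpha) for p in model.parameters())
    return task_loss + lam * reg
```

Under this reading, `lam` trades off task accuracy against how tightly the weights cluster; once they sit near the targets, the final hard quantization is only a small perturbation, which is consistent with the "smooth transition" the abstract describes.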
Language | English |
Source URL | http://ir.ia.ac.cn/handle/173211/44837 |
Collection | Research Center for Precision Sensing and Control: Precision Sensing and Control |
Affiliations | 1. School of Computer and Control Engineering, University of Chinese Academy of Sciences; 2. Institute of Automation, Chinese Academy of Sciences |
Recommended Citation (GB/T 7714) | Hu YM. CLUSTER REGULARIZED QUANTIZATION FOR DEEP NETWORKS COMPRESSION[C]. In: . Taipei International Convention Center, Taiwan. 2019.9.23. |