Chinese Academy of Sciences Institutional Repositories Grid
Quantized CNN: A Unified Approach to Accelerate and Compress Convolutional Networks

Document Type: Journal Article

Authors: Cheng, Jian (3,4,5); Wu, Jiaxiang (2,3,4); Leng, Cong (3,4); Wang, Yuhang (1,3,4); Hu, Qinghao (3,4)
Journal: IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS
Publication Date: 2018-10-01
Volume: 29  Issue: 10  Pages: 4730-4743
Keywords: Acceleration and compression; Convolutional neural network (CNN); Mobile devices; Product quantization
ISSN: 2162-237X
DOI: 10.1109/TNNLS.2017.2774288
Document Subtype: Article
Abstract: We are witnessing an explosive development and widespread application of deep neural networks (DNNs) in various fields. However, DNN models, especially convolutional neural networks (CNNs), usually involve massive parameters and are computationally expensive, making them extremely dependent on high-performance hardware. This prohibits their further extension, e.g., to applications on mobile devices. In this paper, we present Quantized CNN, a unified approach to accelerate and compress convolutional networks. Guided by minimizing the approximation error of each individual layer's response, both fully connected and convolutional layers are carefully quantized. Inference can then be carried out efficiently on the quantized network, with much lower memory and storage consumption. Quantitative evaluation on two publicly available benchmarks demonstrates the promising performance of our approach: with comparable classification accuracy, it achieves 4-6x acceleration and 15-20x compression. With our method, accurate image classification can even be carried out directly on mobile devices within 1 second.
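The core primitive the abstract refers to is product quantization: a weight matrix is split into subspaces, and each subvector is replaced by the index of its nearest codeword from a small learned codebook, so inference can reuse precomputed codeword inner products. The sketch below is a minimal, generic product quantizer over a weight matrix (plain k-means per subspace); it is illustrative only and does not implement the paper's layer-response error-minimizing training procedure, and all function and parameter names are ours, not the authors'.

```python
import numpy as np

def product_quantize(W, n_subspaces=4, n_codewords=16, n_iter=20, seed=0):
    """Split the rows of W (d x m) into n_subspaces groups and learn a
    codebook of n_codewords centroids per subspace via k-means.
    Returns the per-subspace codebooks and the assignment codes."""
    rng = np.random.default_rng(seed)
    d, m = W.shape
    assert d % n_subspaces == 0, "d must be divisible by n_subspaces"
    sub = d // n_subspaces
    codebooks, codes = [], []
    for s in range(n_subspaces):
        X = W[s * sub:(s + 1) * sub].T          # m subvectors of length sub
        C = X[rng.choice(m, n_codewords, replace=False)]  # init centroids
        for _ in range(n_iter):
            # assign each subvector to its nearest codeword (squared L2)
            dist = ((X[:, None, :] - C[None]) ** 2).sum(-1)
            assign = dist.argmin(axis=1)
            for k in range(n_codewords):        # recompute centroids
                if (assign == k).any():
                    C[k] = X[assign == k].mean(axis=0)
        codebooks.append(C)
        codes.append(assign)
    return codebooks, codes

def reconstruct(codebooks, codes):
    """Rebuild the approximated weight matrix from codebooks and codes."""
    return np.vstack([C[a].T for C, a in zip(codebooks, codes)])
```

Storage drops because each column of a subspace is stored as one small integer index plus a shared codebook; speedup comes from precomputing the inner products between a layer's input subvectors and the codewords, then answering each output by table lookup instead of a full dot product.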
WOS Keywords: LEARNING BINARY-CODES; ITERATIVE QUANTIZATION; PROCRUSTEAN APPROACH; RECOGNITION; FPGAS
WOS Research Areas: Computer Science; Engineering
Language: English
WOS Record No.: WOS:000445351300015
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Funding: National Natural Science Foundation of China (61332016); Scientific Research Key Program of Beijing Municipal Commission of Education (KZ201610005012); Fund of Hubei Key Laboratory of Transportation Internet of Things; Fund of Jiangsu Key Laboratory of Big Data Analysis Technology
Source URL: http://ir.ia.ac.cn/handle/173211/27920
Collection: Brain-inspired Chip and System Research
Corresponding Author: Cheng, Jian
Author Affiliations:
1. UISEE Technol Beijing Ltd, Beijing 102402, Peoples R China
2. Tencent AI Lab, Machine Learning Grp, Shenzhen 518000, Peoples R China
3. Chinese Acad Sci, Inst Automat, Beijing 100190, Peoples R China
4. Univ Chinese Acad Sci, Beijing 100190, Peoples R China
5. CAS Ctr Excellence Brain Sci & Intelligence Techn, Beijing 100190, Peoples R China
Recommended Citation:
GB/T 7714: Cheng, Jian, Wu, Jiaxiang, Leng, Cong, et al. Quantized CNN: A Unified Approach to Accelerate and Compress Convolutional Networks[J]. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2018, 29(10): 4730-4743.
APA: Cheng, Jian, Wu, Jiaxiang, Leng, Cong, Wang, Yuhang, & Hu, Qinghao. (2018). Quantized CNN: A Unified Approach to Accelerate and Compress Convolutional Networks. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 29(10), 4730-4743.
MLA: Cheng, Jian, et al. "Quantized CNN: A Unified Approach to Accelerate and Compress Convolutional Networks." IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 29.10 (2018): 4730-4743.

Ingestion Method: OAI harvesting

Source: Institute of Automation


Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.