LightweightNet: Toward fast and lightweight convolutional neural networks via architecture distillation
Document Type: Journal Article
Authors | Xu, Ting-Bing (1,2); Yang, Peipei; Zhang, Xu-Yao; Liu, Cheng-Lin
Journal | PATTERN RECOGNITION
Publication Date | 2019-04-01
Volume | 88
Issue | 88
Pages | 272-284
Keywords | Deep network acceleration and compression; Architecture distillation; Lightweight network
ISSN | 0031-3203
DOI | 10.1016/j.patcog.2018.10.029
Abstract | In recent years, deep neural networks have achieved remarkable successes in many pattern recognition tasks. However, the high computational cost and large memory overhead hinder their application on resource-limited devices. To address this problem, many deep network acceleration and compression methods have been proposed. One group of methods adopts decomposition and pruning techniques to accelerate and compress a pre-trained model. Another group designs a single compact unit and stacks it to build their own networks. These methods suffer from complicated training processes or lack generality and extensibility. In this paper, we propose a general framework of architecture distillation, namely LightweightNet, to accelerate and compress convolutional neural networks. Rather than compressing a pre-trained model, we directly construct the lightweight network based on a baseline network architecture. The LightweightNet, designed based on a comprehensive analysis of the network architecture, consists of network parameter compression, network structure acceleration, and non-tensor layer improvement. Specifically, we propose the strategy of low-dimensional features of fully-connected layers for substantial memory saving, and design multiple efficient compact blocks to distill the convolutional layers of the baseline network with an accuracy-sensitive distillation rule for notable time saving. Finally, it can effectively reduce the computational cost and the model size by > 4x with negligible accuracy loss. Benchmarks on the MNIST, CIFAR-10, ImageNet and HCCR (handwritten Chinese character recognition) datasets demonstrate the advantages of the proposed framework in terms of speed, performance, storage and training process. On HCCR, our method even outperforms traditional handcrafted-feature-based classifiers in terms of speed and storage while maintaining state-of-the-art recognition performance. (C) 2018 Elsevier Ltd. All rights reserved. (An illustrative layer-substitution sketch follows this record.)
WOS Keywords | FEATURE-EXTRACTION; CHARACTER; RECOGNITION; NORMALIZATION; ONLINE
Funding | National Natural Science Foundation of China (NSFC) [61721004]; National Natural Science Foundation of China (NSFC) [61633021]
WOS Research Areas | Computer Science; Engineering
Language | English
WOS Accession Number | WOS:000457666900021
Publisher | ELSEVIER SCI LTD
Source URL | http://ir.ia.ac.cn/handle/173211/25265
Collection | Institute of Automation, National Laboratory of Pattern Recognition (NLPR), Pattern Analysis and Learning Group
Corresponding Author | Liu, Cheng-Lin
Author Affiliations | 1. Chinese Acad Sci, Inst Automat, NLPR, Beijing 100190, Peoples R China; 2. UCAS, Sch Artificial Intelligence, Beijing 100190, Peoples R China; 3. CAS Ctr Excellence Brain Sci & Intelligence Techn, Beijing 100190, Peoples R China
Recommended Citation (GB/T 7714) | Xu, Ting-Bing, Yang, Peipei, Zhang, Xu-Yao, et al. LightweightNet: Toward fast and lightweight convolutional neural networks via architecture distillation[J]. PATTERN RECOGNITION, 2019, 88(88): 272-284.
APA | Xu, Ting-Bing, Yang, Peipei, Zhang, Xu-Yao, & Liu, Cheng-Lin. (2019). LightweightNet: Toward fast and lightweight convolutional neural networks via architecture distillation. PATTERN RECOGNITION, 88(88), 272-284.
MLA | Xu, Ting-Bing, et al. "LightweightNet: Toward fast and lightweight convolutional neural networks via architecture distillation". PATTERN RECOGNITION 88.88 (2019): 272-284.
Deposit Method: OAI Harvesting
Source: Institute of Automation
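The abstract describes distilling the convolutional layers of a baseline network into efficient compact blocks and replacing wide fully-connected layers with low-dimensional features, but the record does not spell out the actual block designs. The PyTorch snippet below is only a minimal, hypothetical sketch of what such layer-level substitutions can look like, assuming a depthwise-separable block in place of a standard convolution and a linear bottleneck in place of a wide fully-connected layer; the names `CompactBlock` and `LowDimFC` and their structures are illustrative assumptions, not the paper's method.

```python
# Hypothetical sketch of layer-level architecture distillation.
# NOT the paper's actual block design: assumes a depthwise-separable
# substitute for a standard convolution and a low-dimensional bottleneck
# substitute for a wide fully-connected layer.
import torch
import torch.nn as nn


class CompactBlock(nn.Module):
    """Drop-in replacement for a k x k convolution (illustrative design)."""

    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        # Depthwise conv: one k x k filter per input channel.
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2,
                                   groups=in_ch, bias=False)
        # Pointwise conv: 1 x 1 projection to the output channel count.
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))


class LowDimFC(nn.Module):
    """Replace one wide fully-connected layer with a low-dimensional bottleneck."""

    def __init__(self, in_features, out_features, bottleneck=64):
        super().__init__()
        self.reduce = nn.Linear(in_features, bottleneck)
        self.expand = nn.Linear(bottleneck, out_features)

    def forward(self, x):
        return self.expand(torch.relu(self.reduce(x)))


if __name__ == "__main__":
    x = torch.randn(1, 64, 32, 32)
    block = CompactBlock(64, 128)
    feat = block(x)
    print(feat.shape)                                # torch.Size([1, 128, 32, 32])
    fc = LowDimFC(128 * 32 * 32, 10, bottleneck=64)
    print(fc(feat.flatten(1)).shape)                 # torch.Size([1, 10])
```

For a sense of scale under these assumptions, a standard 3x3 convolution from 64 to 128 channels holds 64 x 128 x 9 = 73,728 weights, while the depthwise (64 x 9 = 576) plus pointwise (64 x 128 = 8,192) pair holds 8,768; per-layer savings of this kind are what make overall reductions such as the > 4x figure quoted in the abstract plausible.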