Chinese Academy of Sciences Institutional Repositories Grid
Multi-Granularity Pruning for Model Acceleration on Mobile Devices

Document Type: Conference Paper

Authors: Zhao TL (赵天理) 4,5; Zhang X (张希) 4,5; Zhu WT (朱文涛) 1; Wang JX (王家兴) 2; Yang S (杨森) 3; Liu J (刘季) 6; Cheng J (程健) 4,5
Publication Date: 2022
Conference Date: 2022-07
Conference Venue: Online
Keywords: Deep Neural Networks; Network Pruning; Structured Pruning; Non-structured Pruning; Single Instruction Multiple Data
Abstract:

For practical deep neural network design on mobile devices, it is essential to consider the constraints imposed by computational resources and inference latency in various applications. Among deep network acceleration approaches, pruning is a widely adopted practice for balancing computational resource consumption and accuracy, where unimportant connections can be removed either channel-wise or randomly with minimal impact on model accuracy. Coarse-grained channel pruning instantly yields a significant latency reduction, while fine-grained weight pruning is more flexible for retaining accuracy. In this paper, we present a unified framework for Joint Channel pruning and Weight pruning, named JCW, which achieves a better pruning proportion between channel and weight pruning. To fully optimize the trade-off between latency and accuracy, we further develop a tailored multi-objective evolutionary algorithm within the JCW framework, which enables a single search round to obtain accurate candidate architectures for various deployment requirements. Extensive experiments demonstrate that JCW achieves a better trade-off between latency and accuracy than previous state-of-the-art pruning methods on the ImageNet classification dataset.
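The two pruning granularities contrasted in the abstract can be sketched with a minimal NumPy example. This is illustrative only: the magnitude-based importance criterion, function names, and keep ratios are assumptions for the sketch, not the JCW method, which jointly searches both proportions with an evolutionary algorithm.

```python
import numpy as np

def channel_prune(weight, keep_ratio):
    """Coarse-grained (structured) pruning sketch: drop whole output
    channels with the smallest L1 norms. The tensor physically shrinks,
    which translates directly into lower latency."""
    n_out = weight.shape[0]
    n_keep = max(1, int(round(n_out * keep_ratio)))
    norms = np.abs(weight).reshape(n_out, -1).sum(axis=1)
    keep = np.sort(np.argsort(norms)[-n_keep:])  # indices of kept channels
    return weight[keep]

def weight_prune(weight, keep_ratio):
    """Fine-grained (non-structured) pruning sketch: zero individual
    weights with the smallest magnitudes. The shape is preserved, so the
    layer stays flexible for retaining accuracy but needs sparse support
    (e.g. SIMD-friendly kernels) to speed up inference."""
    flat = np.abs(weight).ravel()
    n_keep = max(1, int(round(flat.size * keep_ratio)))
    threshold = np.sort(flat)[-n_keep]
    mask = np.abs(weight) >= threshold
    return weight * mask

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 4, 3, 3))  # (out_channels, in_channels, kH, kW)

w_channel = channel_prune(w, keep_ratio=0.5)  # shape shrinks to (4, 4, 3, 3)
w_sparse = weight_prune(w, keep_ratio=0.5)    # shape kept, half the weights zeroed
```

A joint scheme in the spirit of the paper would treat the per-layer channel keep ratio and weight keep ratio as a search space and evaluate candidates on both latency and accuracy.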

Language: English
Source URL: [http://ir.ia.ac.cn/handle/173211/52090]
Collection: Brain-Inspired Chips and Systems Research
Corresponding Author: Cheng J (程健)
Author Affiliations:
1. Amazon
2. JD.com
3. Snap Inc.
4. Institute of Automation, Chinese Academy of Sciences
5. University of Chinese Academy of Sciences
6. Kuaishou
Recommended Citation (GB/T 7714):
Zhao TL, Zhang X, Zhu WT, et al. Multi-Granularity Pruning for Model Acceleration on Mobile Devices[C]. In: Online. 2022-07.

Ingest Method: OAI Harvesting

Source: Institute of Automation

