Chinese Academy of Sciences Institutional Repositories Grid
BitXpro: Regularity-Aware Hardware Runtime Pruning for Deep Neural Networks

Document type: Journal article

Authors: Li, Hongyan (4); Lu, Hang (2,3); Wang, Haoxuan (4); Deng, Shengji (1); Li, Xiaowei (4)
Journal: IEEE TRANSACTIONS ON VERY LARGE SCALE INTEGRATION (VLSI) SYSTEMS
Publication date: 2023
Volume: 31; Issue: 1; Pages: 90-103
Keywords: Deep learning accelerator; deep neural network (DNN); hardware runtime pruning
ISSN: 1063-8210
DOI: 10.1109/TVLSI.2022.3221732
Abstract: Classic deep neural network (DNN) pruning mostly leverages software-based methodologies to tackle the accuracy/speed tradeoff, which involves complicated procedures such as critical parameter searching, fine-tuning, and sparse training to find the best plan. In this article, we explore the opportunities of hardware runtime pruning and propose a regularity-aware hardware runtime pruning methodology, termed "BitXpro," to empower versatile DNN inference. The method targets the bit-level sparsity and the sparsity irregularity in the parameters and pinpoints and prunes the useless bits on-the-fly in the proposed BitXpro accelerator. The versatility of BitXpro lies in: 1) software effortless; 2) orthogonal to the software-based pruning; and 3) multiprecision support (including both floating point and fixed point). Empirical studies on various domain-specific artificial intelligence (AI) tasks highlight the following results: 1) up to 8.27x speedup over the original nonpruned DNN and 10.81x speedup collaborated with the software-pruned DNN; 2) up to 0.3% and 0.04% higher accuracy for the floating- and fixed-point DNNs, respectively; and 3) 6.01x and 8.20x performance improvement over the state-of-the-art accelerators, with 0.068 mm2 area and 74.82 mW (floating point 32) and 40.44 mW (16-bit fixed point) power consumption under the TSMC 28-nm technology library.
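The core idea the abstract describes, pruning the "useless bits" of each parameter at runtime so a multiply needs fewer shift-and-add terms, can be illustrated with a minimal sketch. This is not the paper's actual BitXpro algorithm; `prune_bits` is a hypothetical helper that keeps only the most significant set bits of a fixed-point weight magnitude:

```python
def prune_bits(weight: int, keep: int) -> int:
    """Keep the `keep` most significant set bits of a fixed-point weight's
    magnitude and zero the rest, preserving the sign. A multiply against the
    pruned weight then costs at most `keep` shift-and-add operations."""
    sign = -1 if weight < 0 else 1
    mag = abs(weight)
    pruned = 0
    for _ in range(keep):
        if mag == 0:
            break  # fewer than `keep` set bits: nothing left to keep
        top = 1 << (mag.bit_length() - 1)  # highest remaining set bit
        pruned |= top
        mag ^= top  # clear it and move to the next one down
    return sign * pruned

# 91 = 0b1011011 has five set bits; keeping the top two gives 80 = 0b1010000.
print(prune_bits(91, 2))   # 80
print(prune_bits(-91, 2))  # -80
```

The sketch only shows the bit-level sparsity aspect; the accelerator itself does this in hardware, on-the-fly, and additionally handles the irregularity of which bits survive across parameters.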
WOS research areas: Computer Science; Engineering
Language: English
WOS accession number: WOS:000911286400009
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Source URL: http://119.78.100.204/handle/2XEOYT63/20053
Collection: Journal Articles, Institute of Computing Technology, Chinese Academy of Sciences
Corresponding author: Lu, Hang
Author affiliations:
1.Civil Aviat Adm China CAAC, Res Inst 2, Beijing 101318, Peoples R China
2.Chinese Acad Sci, Shanghai Innovat Ctr Processor Technol SHIC, Beijing 100190, Peoples R China
3.Chinese Acad Sci, Zhongguancun Lab, Inst Comp Technol, State Key Lab Comp Architecture, Beijing 100190, Peoples R China
4.Chinese Acad Sci, Inst Comp Technol, State Key Lab Comp Architecture, Beijing 100190, Peoples R China
Recommended citation:
GB/T 7714
Li, Hongyan, Lu, Hang, Wang, Haoxuan, et al. BitXpro: Regularity-Aware Hardware Runtime Pruning for Deep Neural Networks[J]. IEEE TRANSACTIONS ON VERY LARGE SCALE INTEGRATION (VLSI) SYSTEMS, 2023, 31(1): 90-103.
APA: Li, Hongyan, Lu, Hang, Wang, Haoxuan, Deng, Shengji, & Li, Xiaowei. (2023). BitXpro: Regularity-Aware Hardware Runtime Pruning for Deep Neural Networks. IEEE TRANSACTIONS ON VERY LARGE SCALE INTEGRATION (VLSI) SYSTEMS, 31(1), 90-103.
MLA: Li, Hongyan, et al. "BitXpro: Regularity-Aware Hardware Runtime Pruning for Deep Neural Networks." IEEE TRANSACTIONS ON VERY LARGE SCALE INTEGRATION (VLSI) SYSTEMS 31.1 (2023): 90-103.

Ingestion method: OAI harvesting

Source: Institute of Computing Technology


Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.