An Automatic Neural Network Architecture-and-Quantization Joint Optimization Framework for Efficient Model Inference
Document Type: Journal Article
Authors | Liu, Lian (1,2); Wang, Ying (1,2); Zhao, Xiandong (3); Chen, Weiwei; Li, Huawei (1,2); Li, Xiaowei (1,2); Han, Yinhe (1,2) |
Journal | IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS |
Publication Date | 2024-05-01 |
Volume/Issue/Pages | 43(5): 1497-1510 |
Keywords | Optimization; Quantization (signal); Computer architecture; Training; Computational modeling; Integrated circuit modeling; Convergence; Automatic joint optimization; efficient model inference; network quantization; neural architecture search (NAS) |
ISSN | 0278-0070 |
DOI | 10.1109/TCAD.2023.3339438 |
Abstract | Efficient deep learning models, especially those optimized for edge devices, benefit from low inference latency and efficient energy consumption. Two classical techniques for efficient model inference are lightweight neural architecture search (NAS), which automatically designs compact network models, and quantization, which reduces the bit-precision of neural network models. Consequently, joint design of neural architecture and quantization precision settings is becoming increasingly popular. There are three main aspects that affect the performance of the joint optimization between neural architecture and quantization: 1) quantization precision selection (QPS); 2) quantization-aware training (QAT); and 3) NAS. However, existing works address at most two of these aspects and thus achieve suboptimal performance. To this end, we propose a novel automatic optimization framework, DAQU, which jointly searches for Pareto-optimal combinations of neural architecture and quantization precision among more than $10^{47}$ quantized subnet models. To overcome the instability of conventional automatic optimization frameworks, DAQU incorporates a warm-up strategy to reduce the accuracy gap among different neural architectures, and a precision-transfer training approach to maintain flexibility among different quantization precision settings. Our experiments show that the quantized lightweight neural networks generated by DAQU consistently outperform those produced by state-of-the-art NAS and quantization joint optimization methods. |
Funding | National Natural Science Foundation of China (NSFC) |
WOS Research Areas | Computer Science; Engineering |
Language | English |
WOS Accession Number | WOS:001225897600014 |
Publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC |
Source URL | http://119.78.100.204/handle/2XEOYT63/40074 |
Collection | Institute of Computing Technology, Chinese Academy of Sciences: Journal Papers (English) |
Corresponding Authors | Wang, Ying; Li, Huawei |
Author Affiliations | 1. Chinese Acad Sci, Inst Comp Technol, State Key Lab Processors, Beijing 100190, Peoples R China; 2. Univ Chinese Acad Sci, Dept Comp Sci, Beijing 100190, Peoples R China; 3. Chinese Acad Sci, Inst Comp Technol, Beijing 100190, Peoples R China |
Recommended Citation (GB/T 7714) | Liu, Lian, Wang, Ying, Zhao, Xiandong, et al. An Automatic Neural Network Architecture-and-Quantization Joint Optimization Framework for Efficient Model Inference[J]. IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, 2024, 43(5): 1497-1510. |
APA | Liu, Lian, Wang, Ying, Zhao, Xiandong, Chen, Weiwei, Li, Huawei, ... & Han, Yinhe. (2024). An Automatic Neural Network Architecture-and-Quantization Joint Optimization Framework for Efficient Model Inference. IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, 43(5), 1497-1510. |
MLA | Liu, Lian, et al. "An Automatic Neural Network Architecture-and-Quantization Joint Optimization Framework for Efficient Model Inference." IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS 43.5 (2024): 1497-1510. |
Deposit Method: OAI harvesting
Source: Institute of Computing Technology
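The abstract above frames DAQU as a Pareto-optimal joint search over neural-architecture and quantization-precision combinations. The toy Python sketch below illustrates only that general idea under assumed names (DEPTHS, WIDTHS, PRECISIONS) and a made-up evaluate() proxy; it is not the authors' DAQU implementation and omits the paper's warm-up strategy and precision-transfer training.

```python
# Minimal, hypothetical sketch: enumerate a toy architecture/precision space
# and keep the Pareto-optimal (accuracy, cost) combinations.
import itertools
import random

DEPTHS = [8, 12, 16]          # candidate network depths (assumed)
WIDTHS = [0.5, 0.75, 1.0]     # candidate width multipliers (assumed)
PRECISIONS = [2, 4, 8]        # candidate weight/activation bit-widths (assumed)

def evaluate(depth, width, bits):
    """Stand-in for quantization-aware evaluation of one subnet.

    Returns a made-up (accuracy, cost) pair; a real system would train or
    fine-tune the quantized subnet and measure latency/energy on hardware.
    """
    accuracy = 0.60 + 0.015 * depth * width * (bits / 8) + random.uniform(-0.01, 0.01)
    cost = depth * width * bits  # crude proxy for latency/energy
    return accuracy, cost

def dominates(a, b):
    """a dominates b if it is no worse in both objectives and strictly better in one."""
    return (a["acc"] >= b["acc"] and a["cost"] <= b["cost"]
            and (a["acc"] > b["acc"] or a["cost"] < b["cost"]))

def pareto_front(candidates):
    """Keep only candidates that no other candidate dominates."""
    return [c for c in candidates if not any(dominates(o, c) for o in candidates)]

candidates = []
for depth, width, bits in itertools.product(DEPTHS, WIDTHS, PRECISIONS):
    acc, cost = evaluate(depth, width, bits)
    candidates.append({"depth": depth, "width": width, "bits": bits,
                       "acc": acc, "cost": cost})

for c in sorted(pareto_front(candidates), key=lambda c: c["cost"]):
    print(f"depth={c['depth']:2d} width={c['width']:.2f} bits={c['bits']} "
          f"acc={c['acc']:.3f} cost={c['cost']:.1f}")
```

The printed front is the set of non-dominated architecture/precision pairs; DAQU's actual search space (more than $10^{47}$ quantized subnets) is far too large for exhaustive enumeration like this, which is why the paper resorts to an automatic optimization framework rather than brute force.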