Chinese Academy of Sciences Institutional Repositories Grid
FastTuning: Enabling Fast and Efficient Hyper-Parameter Tuning With Partitioning and Parallelism of Search Space

Document type: Journal article

Authors: Li, Xiaqing1,2; Guo, Qi1,2; Zhang, Guangyan1,2; Ye, Siwei1,2; He, Guanhua1,2; Yao, Yiheng1,2; Zhang, Rui1,2; Hao, Yifan1,2; Du, Zidong1,2; Zheng, Weimin1,2
Journal: IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS
Publication date: 2024-07-01
Volume: 35; Issue: 7; Pages: 1174-1188
Keywords: Deep learning; Distributed hyper-parameter tuning (HPT) system; Parallel computing
ISSN: 1045-9219
DOI: 10.1109/TPDS.2024.3386939
Abstract: Hyper-parameter tuning (HPT) for deep learning (DL) models is prohibitively expensive. Sequential model-based optimization (SMBO) emerges as the state-of-the-art (SOTA) approach to automatically optimize HPT performance due to its heuristic advantages. Unfortunately, focusing on algorithm optimization rather than a large-scale parallel HPT system, existing SMBO-based approaches still cannot effectively remove their strong sequential nature, posing two performance problems: (1) extremely low tuning speed and (2) sub-optimal model quality. In this paper, we propose FastTuning, a fast, scalable, and generic system aiming at accelerating SMBO-based HPT in parallel for large DL/ML models. The key is to partition the highly complex search space into multiple smaller sub-spaces, each of which is assigned to and optimized by a different tuning worker in parallel. However, determining the right level of resource allocation to strike a balance between quality and cost remains a challenge. To address this, we further propose NIMBLE, a dynamic scheduling strategy that is specially designed for FastTuning, including (1) Dynamic Elimination Algorithm, (2) Sub-space Re-division, and (3) Posterior Information Sharing. Finally, we incorporate 6 SOTAs (i.e., 3 tuning algorithms and 3 parallel tuning tools) into FastTuning. Experimental results, on ResNet18, VGG19, ResNet50, and ResNet152, show that FastTuning can consistently offer much faster tuning speed (up to 80x) with better accuracy (up to 4.7% improvement), thereby enabling the application of automatic HPT to real-life DL models.
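The core idea in the abstract, splitting a search space into disjoint sub-spaces and giving each to a parallel tuning worker, can be sketched in a few lines. The following is a minimal illustration only, not the authors' FastTuning implementation: `toy_objective`, the learning-rate range, and the use of plain random search per worker are all assumptions for the sketch, and NIMBLE's dynamic elimination, re-division, and information sharing are omitted.

```python
# Sketch of search-space partitioning with parallel tuning workers.
# toy_objective is a hypothetical stand-in for model validation accuracy;
# real workers would run SMBO (e.g., Bayesian optimization), not random search.
import random
from concurrent.futures import ThreadPoolExecutor

def toy_objective(lr):
    """Toy stand-in for validation accuracy; peaks at lr = 0.01."""
    return 1.0 / (1.0 + 100.0 * abs(lr - 0.01))

def partition(lo, hi, k):
    """Split the 1-D range [lo, hi) into k disjoint sub-spaces."""
    step = (hi - lo) / k
    return [(lo + i * step, lo + (i + 1) * step) for i in range(k)]

def tune_worker(bounds, trials, seed):
    """Search one sub-space independently; return (best_lr, best_score)."""
    rng = random.Random(seed)
    lo, hi = bounds
    candidates = [rng.uniform(lo, hi) for _ in range(trials)]
    best = max(candidates, key=toy_objective)
    return best, toy_objective(best)

# Four workers, each confined to its own sub-space of the learning-rate range.
sub_spaces = partition(1e-4, 0.1, 4)
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(tune_worker, sub_spaces, [50] * 4, range(4)))
best_lr, best_score = max(results, key=lambda r: r[1])
```

Because each worker models only a smaller, simpler region, its surrogate converges faster than one global model over the full space, which is the intuition behind the paper's partition-and-parallelize design.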
Funding: National Key R&D Program of China
WOS research areas: Computer Science; Engineering
Language: English
WOS record ID: WOS:001224174400001
Publisher: IEEE COMPUTER SOC
Source URL: http://119.78.100.204/handle/2XEOYT63/40076
Collection: Institute of Computing Technology, Chinese Academy of Sciences — Journal Articles (English)
Corresponding author: Guo, Qi
Affiliations:
1. Chinese Acad Sci, Inst Comp Technol, Beijing 100045, Peoples R China
2. Tsinghua Univ, Dept Comp Sci & Technol, Beijing 100084, Peoples R China
Recommended citation formats:
GB/T 7714
Li, Xiaqing,Guo, Qi,Zhang, Guangyan,et al. FastTuning: Enabling Fast and Efficient Hyper-Parameter Tuning With Partitioning and Parallelism of Search Space[J]. IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS,2024,35(7):1174-1188.
APA: Li, Xiaqing, Guo, Qi, Zhang, Guangyan, Ye, Siwei, He, Guanhua, ... & Zheng, Weimin. (2024). FastTuning: Enabling Fast and Efficient Hyper-Parameter Tuning With Partitioning and Parallelism of Search Space. IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 35(7), 1174-1188.
MLA: Li, Xiaqing, et al. "FastTuning: Enabling Fast and Efficient Hyper-Parameter Tuning With Partitioning and Parallelism of Search Space". IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS 35.7 (2024): 1174-1188.

Ingestion method: OAI harvesting

Source: Institute of Computing Technology


Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.