Chinese Academy of Sciences Institutional Repositories Grid
EAT-NAS: elastic architecture transfer for accelerating large-scale neural architecture search

Document Type: Journal Article

Authors: Fang, Jiemin (1,2); Chen, Yukang (4); Zhang, Xinbang (4); Zhang, Qian (3); Huang, Chang (3); Meng, Gaofeng (4); Liu, Wenyu (2); Wang, Xinggang (2)
Journal: SCIENCE CHINA-INFORMATION SCIENCES
Publication Date: 2021-09-01
Volume: 64, Issue: 9, Pages: 13
ISSN: 1674-733X
Keywords: architecture transfer; neural architecture search; evolutionary algorithm; large-scale dataset
DOI: 10.1007/s11432-020-3112-8
Corresponding Author: Wang, Xinggang (xgwang@hust.edu.cn)
English Abstract: Neural architecture search (NAS) methods have been proposed to relieve human experts from tedious architecture engineering. However, most current methods are constrained to small-scale search owing to their huge computational resource consumption. Meanwhile, directly applying architectures searched on small datasets to large datasets carries no performance guarantee because of the discrepancy between the datasets. This limitation impedes the wide use of NAS on large-scale tasks. To overcome this obstacle, we propose an elastic architecture transfer mechanism for accelerating large-scale NAS (EAT-NAS). In our implementation, architectures are first searched on a small dataset, e.g., CIFAR-10, and the best one is chosen as the basic architecture. The search process on a large dataset, e.g., ImageNet, is then initialized with the basic architecture as the seed, which accelerates the large-scale search. We propose not only a NAS method but also a mechanism for architecture-level transfer learning. In our experiments, we obtain two final models, EATNet-A and EATNet-B, which achieve competitive accuracies of 75.5% and 75.6%, respectively, on ImageNet. Both models also surpass models searched from scratch on ImageNet under the same settings. In terms of computational cost, EAT-NAS takes fewer than 5 days on 8 TITAN X GPUs, which is significantly less than the consumption of state-of-the-art large-scale NAS methods.
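The transfer mechanism described in the abstract (search on a small proxy dataset, pick the best architecture as a seed, then initialize the large-scale evolutionary search with that seed) can be illustrated with a minimal sketch. The architecture encoding (a list of layer widths), the helper names (mutate, evolve), and the toy fitness functions below are hypothetical stand-ins for the paper's actual search space and training procedure, not its implementation.

# Minimal sketch of the seeding idea from the abstract. All names and the
# encoding are hypothetical simplifications, not the paper's code.
import random

def mutate(arch, step=8):
    """Perturb one gene of the encoding (here: one layer width)."""
    child = list(arch)
    i = random.randrange(len(child))
    child[i] = max(8, child[i] + random.choice([-step, step]))
    return child

def evolve(population, fitness, generations=30):
    """Simple elitist evolutionary loop: mutate a good parent, drop the worst."""
    scored = [(arch, fitness(arch)) for arch in population]
    for _ in range(generations):
        scored.sort(key=lambda pair: pair[1], reverse=True)
        parent = random.choice(scored[: max(1, len(scored) // 4)])[0]
        child = mutate(parent)
        scored[-1] = (child, fitness(child))
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[0][0]

# Toy fitness functions standing in for training/evaluating a network
# on the small (CIFAR-10-like) and large (ImageNet-like) tasks.
def fitness_small(arch):
    return -abs(sum(arch) - 256) / 256.0

def fitness_large(arch):
    return -abs(sum(arch) - 1024) / 1024.0

if __name__ == "__main__":
    random.seed(0)
    # Stage 1: evolutionary search on the small task from a random population.
    random_pop = [[random.choice([16, 32, 64]) for _ in range(6)] for _ in range(16)]
    basic_arch = evolve(random_pop, fitness_small)

    # Stage 2 (the transfer step): initialize the large-scale search with the
    # basic architecture and its mutants instead of random architectures.
    seeded_pop = [list(basic_arch)] + [mutate(basic_arch) for _ in range(15)]
    final_arch = evolve(seeded_pop, fitness_large)
    print("basic architecture (seed):", basic_arch)
    print("final architecture:", final_arch)

The sketch only captures the seed-based initialization described in the abstract; the paper's actual search space, mutation operators, and training pipeline are substantially richer.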

Funding Projects: National Natural Science Foundation of China (NSFC) [61876212, 61976208, 61733007]; Zhejiang Lab [2019NB0AB02]; HUST-Horizon Computer Vision Research Center
WOS Research Areas: Computer Science; Engineering
Language: English
Publisher: SCIENCE PRESS
WOS Record Number: WOS:000685212100001
Funding Organizations: National Natural Science Foundation of China (NSFC); Zhejiang Lab; HUST-Horizon Computer Vision Research Center
Source URL: http://ir.ia.ac.cn/handle/173211/45688
Collection: Institute of Automation, National Laboratory of Pattern Recognition, Remote Sensing Image Processing Team
Corresponding Author: Wang, Xinggang
Author Affiliations:
1.Huazhong Univ Sci & Technol, Inst Artificial Intelligence, Wuhan 430074, Peoples R China
2.Huazhong Univ Sci & Technol, Sch Elect Informat & Commun, Wuhan 430074, Peoples R China
3.Horizon Robot, Beijing 100089, Peoples R China
4.Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
Recommended Citation
GB/T 7714
Fang, Jiemin, Chen, Yukang, Zhang, Xinbang, et al. EAT-NAS: elastic architecture transfer for accelerating large-scale neural architecture search[J]. SCIENCE CHINA-INFORMATION SCIENCES, 2021, 64(9): 13.
APA Fang, Jiemin, Chen, Yukang, Zhang, Xinbang, Zhang, Qian, Huang, Chang, ... & Wang, Xinggang. (2021). EAT-NAS: elastic architecture transfer for accelerating large-scale neural architecture search. SCIENCE CHINA-INFORMATION SCIENCES, 64(9), 13.
MLA Fang, Jiemin, et al. "EAT-NAS: elastic architecture transfer for accelerating large-scale neural architecture search." SCIENCE CHINA-INFORMATION SCIENCES 64.9 (2021): 13.

Deposit Method: OAI Harvesting

Source: Institute of Automation

