Chinese Academy of Sciences Institutional Repositories Grid
You Only Search Once: Single Shot Neural Architecture Search via Direct Sparse Optimization

Document type: Journal article

Authors: Zhang, Xinbang 1,2; Huang, Zehao 3; Wang, Naiyan 3; Xiang, Shiming 1,2; Pan, Chunhong 1
Journal: IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE
Publication date: 2021-09-01
Volume: 43; Issue: 9; Pages: 2891-2904
Keywords: Computer architecture; Optimization; Learning (artificial intelligence); Task analysis; Acceleration; Evolutionary computation; Convolution; Neural architecture search (NAS); convolutional neural network; sparse optimization
ISSN: 0162-8828
DOI: 10.1109/TPAMI.2020.3020300
Corresponding author: Zhang, Xinbang (xinbang.zhang@nlpr.ia.ac.cn)
Abstract: Recently, neural architecture search (NAS) has raised great interest in both academia and industry. However, it remains challenging because of its huge and non-continuous search space. Instead of applying evolutionary algorithms or reinforcement learning as in previous works, this paper proposes a direct sparse optimization NAS (DSO-NAS) method. The motivation behind DSO-NAS is to address the task from the perspective of model pruning. To achieve this goal, we start from a completely connected block, and then introduce scaling factors to scale the information flow between operations. Next, sparse regularizations are imposed to prune useless connections in the architecture. Lastly, an efficient and theoretically sound optimization method is derived to solve it. Our method enjoys the advantages of both differentiability and efficiency, so it can be directly applied to large datasets like ImageNet and tasks beyond classification. In particular, on the CIFAR-10 dataset, DSO-NAS achieves an average test error of 2.74 percent, while on the ImageNet dataset DSO-NAS achieves a 25.4 percent test error under 600M FLOPs with 8 GPUs in 18 hours. On the semantic segmentation task, DSO-NAS also achieves competitive results compared with manually designed architectures on the PASCAL VOC dataset. Code is available at https://github.com/XinbangZhang/DSO-NAS.

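The abstract's search-by-pruning idea, learnable scaling factors on the connections of a completely connected block plus a sparse regularizer that drives useless connections toward zero, can be sketched in a few lines. The PyTorch snippet below is only an illustration under assumed names (SparseBlock, lam, l1_penalty are hypothetical); it uses plain SGD on an L1-regularized loss rather than the dedicated sparse optimization method the paper derives, and a single 3x3 convolution stands in for the full set of candidate operations.

```python
# Minimal sketch of connection scaling factors + sparse regularization for pruning.
# Not the paper's implementation; see https://github.com/XinbangZhang/DSO-NAS for that.
import torch
import torch.nn as nn


class SparseBlock(nn.Module):
    """A fully connected block whose edges carry learnable scaling factors."""

    def __init__(self, channels: int, num_ops: int = 4):
        super().__init__()
        # Candidate operations; every earlier node can feed every later one.
        self.ops = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=1) for _ in range(num_ops)
        )
        # One scaling factor per (source node -> target op) connection,
        # with the block input counted as source node 0.
        self.lam = nn.Parameter(torch.ones(num_ops, num_ops + 1))

    def forward(self, x):
        nodes = [x]
        for i, op in enumerate(self.ops):
            # Scale and sum the information flowing in from all earlier nodes.
            agg = sum(self.lam[i, j] * nodes[j] for j in range(len(nodes)))
            nodes.append(op(agg))
        return nodes[-1]

    def l1_penalty(self):
        # Sparse regularization that pushes useless connections toward zero.
        return self.lam.abs().sum()

    def pruned_edges(self, threshold: float = 1e-3):
        # Connections whose scaling factor collapses below the threshold get removed.
        return (self.lam.abs() < threshold).nonzero(as_tuple=False)


if __name__ == "__main__":
    block = SparseBlock(channels=16)
    opt = torch.optim.SGD(block.parameters(), lr=0.01)
    x, target = torch.randn(2, 16, 8, 8), torch.randn(2, 16, 8, 8)
    for _ in range(10):
        loss = nn.functional.mse_loss(block(x), target) + 1e-3 * block.l1_penalty()
        opt.zero_grad()
        loss.backward()
        opt.step()
    print("pruned connections:", block.pruned_edges().tolist())
```

After training, edges whose scaling factors fall below a small threshold would be pruned, mirroring the "prune useless connections" step in the abstract; the reported CIFAR-10 and ImageNet numbers come from the full method, not from this sketch.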

WOS keywords: NETWORKS; ALGORITHM; GAME; GO
Funding projects: Major Project for New Generation of AI [2018AAA0100400]; National Natural Science Foundation of China [91646207]; National Natural Science Foundation of China [61976208]
WOS research areas: Computer Science; Engineering
Language: English
WOS accession number: WOS:000681124300006
Publisher: IEEE COMPUTER SOC
Funding agencies: Major Project for New Generation of AI; National Natural Science Foundation of China
Source URL: [http://ir.ia.ac.cn/handle/173211/45631]
Collection: Institute of Automation, National Laboratory of Pattern Recognition, Remote Sensing Image Processing Team
Corresponding author: Zhang, Xinbang
Author affiliations:
1. Chinese Acad Sci, Inst Automat, Dept Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
2. Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing 100049, Peoples R China
3. Tusimple, Beijing 100020, Peoples R China
Recommended citation:
GB/T 7714
Zhang, Xinbang, Huang, Zehao, Wang, Naiyan, et al. You Only Search Once: Single Shot Neural Architecture Search via Direct Sparse Optimization[J]. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2021, 43(9): 2891-2904.
APA Zhang, Xinbang, Huang, Zehao, Wang, Naiyan, Xiang, Shiming, & Pan, Chunhong. (2021). You Only Search Once: Single Shot Neural Architecture Search via Direct Sparse Optimization. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 43(9), 2891-2904.
MLA Zhang, Xinbang, et al. "You Only Search Once: Single Shot Neural Architecture Search via Direct Sparse Optimization". IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 43.9 (2021): 2891-2904.

Ingest method: OAI harvesting

Source: Institute of Automation


Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.