Chinese Academy of Sciences Institutional Repositories Grid
BNAS-v2: Memory-efficient and Performance-collapse-prevented Broad Neural Architecture Search

Document Type: Journal Article

Authors: Zixiang Ding 1,2; Yaran Chen 1,2; Nannan Li 1,2; Dongbin Zhao 1,2
Journal: IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS: SYSTEMS
Publication Date: 2022-01
Volume: 0  Issue: 0  Pages: 0
Keywords: Broad neural architecture search (BNAS); continuous relaxation; confident learning rate; partial channel connections; image classification
Abstract

In this paper, we propose BNAS-v2 to further improve the efficiency of BNAS, which employs a Broad Convolutional Neural Network (BCNN) as the search space. In BNAS, the single-path sampling-and-updating strategy for the over-parameterized BCNN leads to a severe unfair training issue, which limits further efficiency gains. To mitigate the unfair training issue, we employ a continuous relaxation strategy to optimize all paths of the over-parameterized BCNN simultaneously. However, continuous relaxation introduces a performance collapse issue that degrades the performance of the learned BCNN. To address this, we propose the Confident Learning Rate (CLR) and introduce the combination of partial channel connections and edge normalization. Experimental results show that 1) BNAS-v2 delivers state-of-the-art search efficiency on both CIFAR-10 (0.05 GPU days, 4× faster than BNAS) and ImageNet (0.19 GPU days) with better or competitive performance; 2) the above two solutions are effective in alleviating the performance collapse issue; and 3) BNAS-v2 generalizes well to multiple transfer tasks, e.g., MNIST, FashionMNIST, NORB, and SVHN.
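The two search-space ideas named in the abstract, continuous relaxation and partial channel connections, can be illustrated with a toy sketch. This is not the authors' implementation; it is a minimal NumPy illustration of the general DARTS-style mixed operation, where an edge's output is a softmax-weighted sum over candidate operations, and only a 1/k fraction of channels passes through the mixture (partial channel connections). The candidate operations, `alpha` values, and `k` here are all illustrative assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Toy candidate operations on one edge (1-D "feature maps" for simplicity).
ops = [
    lambda x: x,                 # identity / skip connection
    lambda x: np.maximum(x, 0),  # ReLU, standing in for a conv op
    lambda x: np.zeros_like(x),  # "zero" op (no connection)
]

def mixed_op(x, alpha, k=2):
    """Continuously relaxed edge with partial channel connections:
    only 1/k of the channels go through the softmax-weighted mixture
    of candidate ops; the remaining channels bypass unchanged."""
    c = x.shape[0] // k
    w = softmax(alpha)  # architecture weights, one per candidate op
    mixed = sum(wi * op(x[:c]) for wi, op in zip(w, ops))
    return np.concatenate([mixed, x[c:]])

x = np.array([-1.0, 2.0, -3.0, 4.0])
alpha = np.array([0.0, 1.0, -1.0])  # learnable architecture parameters
y = mixed_op(x, alpha)
```

Because every candidate operation contributes to the output, gradients flow to all architecture parameters at once, which is what lets all paths be optimized simultaneously; routing only a channel subset through the mixture is what reduces the memory cost.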

Language: English
WOS Record Number: WOS:000750216400001
Source URL: http://ir.ia.ac.cn/handle/173211/46597
Collection: State Key Laboratory of Management and Control for Complex Systems - Deep Reinforcement Learning
Corresponding Author: Dongbin Zhao
Affiliations:
1. State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences
2. School of Artificial Intelligence, University of Chinese Academy of Sciences
Recommended Citation Formats:
GB/T 7714: Zixiang Ding, Yaran Chen, Nannan Li, et al. BNAS-v2: Memory-efficient and Performance-collapse-prevented Broad Neural Architecture Search[J]. IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS: SYSTEMS, 2022, 0(0): 0.
APA: Zixiang Ding, Yaran Chen, Nannan Li, & Dongbin Zhao. (2022). BNAS-v2: Memory-efficient and Performance-collapse-prevented Broad Neural Architecture Search. IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS: SYSTEMS, 0(0), 0.
MLA: Zixiang Ding, et al. "BNAS-v2: Memory-efficient and Performance-collapse-prevented Broad Neural Architecture Search." IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS: SYSTEMS 0.0 (2022): 0.

Deposit Method: OAI harvesting

Source: Institute of Automation


Unless otherwise noted, all content in this system is protected by copyright, with all rights reserved.