Chinese Academy of Sciences Institutional Repositories Grid
BNAS: Efficient Neural Architecture Search Using Broad Scalable Architecture

Document Type: Journal Article

Authors: Ding, Zixiang (丁子祥)4,5; Chen, Yaran4,5; Li, Nannan4,5; Zhao, Dongbin4,5; Sun, Zhiquan3; Chen, C. L. Philip1,2
Journal: IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS
Publication Date: 2020-03
Issue: 0  Pages: 0
Keywords: Broad convolutional neural network (BCNN), image classification, neural architecture search (NAS), reinforcement learning (RL)
Abstract

Efficient neural architecture search (ENAS) achieves novel efficiency in learning high-performance architectures via parameter sharing and reinforcement learning (RL). In the architecture search phase, ENAS employs a deep scalable architecture as the search space, and training this architecture consumes most of the search cost; moreover, the time-consuming model training is proportional to the depth of the deep scalable architecture. Through experiments with ENAS on CIFAR-10, we find that reducing the number of layers in the scalable architecture effectively accelerates the search process of ENAS but causes a prohibitive performance drop in the architecture estimation phase. In this article, we propose broad neural architecture search (BNAS), in which we elaborately design a broad scalable architecture, dubbed broad convolutional neural network (BCNN), to solve the above issue. On the one hand, the proposed broad scalable architecture trains quickly owing to its shallow topology, and we adopt the RL and parameter sharing used in ENAS as the optimization strategy of BNAS; hence, the proposed approach achieves higher search efficiency. On the other hand, the broad scalable architecture extracts multi-scale features and enhancement representations and feeds them into a global average pooling (GAP) layer to yield more reasonable and comprehensive representations, so its performance can be guaranteed. We also develop two variants of BNAS that modify the topology of BCNN. To verify the effectiveness of BNAS, we perform several experiments, whose results show that 1) BNAS completes the search in 0.19 days, 2.37× less expensive than ENAS, which ranks best among RL-based NAS approaches; 2) compared with small-size (0.5 million parameters) and medium-size (1.1 million parameters) models, the architectures learned by BNAS obtain state-of-the-art performance (3.58% and 3.24% test error, respectively) on CIFAR-10; and 3) the learned architecture achieves 25.3% top-1 error on ImageNet using only 3.9 million parameters.
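The "broad" topology described above can be pictured as a shallow network whose parallel branch outputs are pooled to a common size and concatenated before the classifier, instead of a deep stack of cells. Below is a minimal PyTorch sketch of that idea, assuming a hypothetical two-branch design; the branch count, widths, and names are illustrative only and do not reproduce the paper's actual BCNN or its learned cells.

    # Illustrative sketch of a "broad" (shallow, multi-branch) network:
    # multi-scale branch outputs are each reduced by global average pooling
    # (GAP) and concatenated before the classifier. All sizes are assumptions.
    import torch
    import torch.nn as nn

    class BroadSketch(nn.Module):
        def __init__(self, num_classes: int = 10, width: int = 64):
            super().__init__()
            self.stem = nn.Sequential(
                nn.Conv2d(3, width, 3, padding=1, bias=False),
                nn.BatchNorm2d(width),
                nn.ReLU(inplace=True),
            )
            # Parallel branches at different scales instead of a deep stack.
            self.branch1 = nn.Conv2d(width, width, 3, padding=1)  # full resolution
            self.branch2 = nn.Sequential(                         # half resolution
                nn.MaxPool2d(2),
                nn.Conv2d(width, width, 3, padding=1),
            )
            self.gap = nn.AdaptiveAvgPool2d(1)  # global average pooling (GAP)
            self.classifier = nn.Linear(2 * width, num_classes)

        def forward(self, x):
            h = self.stem(x)
            # GAP collapses each branch to a width-dim vector regardless of
            # spatial scale, so multi-scale features concatenate directly.
            f1 = self.gap(self.branch1(h)).flatten(1)
            f2 = self.gap(self.branch2(h)).flatten(1)
            return self.classifier(torch.cat([f1, f2], dim=1))

    if __name__ == "__main__":
        model = BroadSketch()
        print(model(torch.randn(2, 3, 32, 32)).shape)  # torch.Size([2, 10])

Because GAP makes the concatenation scale-agnostic, a broad design of this kind can mix full- and half-resolution branches without flattening large feature maps, which is consistent with the abstract's claim that shallow topology keeps training fast while multi-scale features preserve representation quality.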

Language: English
WOS Record Number: WOS:000732345100001
Source URL: http://ir.ia.ac.cn/handle/173211/46583
Collection: State Key Laboratory of Management and Control for Complex Systems_Deep Reinforcement Learning
Corresponding Author: Zhao, Dongbin
Author Affiliations:
1. College of Navigation, Dalian Maritime University
2. School of Computer Science & Engineering, South China University of Technology
3. School of Automation and Electrical Engineering, University of Science and Technology Beijing
4. School of Artificial Intelligence, University of Chinese Academy of Sciences
5. State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences
Recommended Citation
GB/T 7714: Ding Zixiang, Chen Yaran, Li Nannan, et al. BNAS: Efficient Neural Architecture Search Using Broad Scalable Architecture[J]. IEEE Transactions on Neural Networks and Learning Systems, 2020(0): 0.
APA: Ding, Z., Chen, Y., Li, N., Zhao, D., Sun, Z., & Chen, C. L. P. (2020). BNAS: Efficient neural architecture search using broad scalable architecture. IEEE Transactions on Neural Networks and Learning Systems, (0), 0.
MLA: Ding, Zixiang, et al. "BNAS: Efficient Neural Architecture Search Using Broad Scalable Architecture." IEEE Transactions on Neural Networks and Learning Systems 0 (2020): 0.

Ingest Method: OAI harvesting

Source: Institute of Automation

