Chinese Academy of Sciences Institutional Repositories Grid
Stacked BNAS: Rethinking Broad Convolutional Neural Network for Neural Architecture Search

Document Type: Journal Article

Authors: Zixiang Ding1,4; Yaran Chen1,4; Nannan Li1,4; Dongbin Zhao1,4; C. L. Philip Chen2,3
Journal: IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS
Publication Date: 2022
Volume: 0; Issue: 0; Pages: 0
Keywords: broad neural architecture search, stacked broad convolutional neural network, knowledge embedding search, image classification
Abstract

Different from other deep scalable architecture-based NAS approaches, Broad Neural Architecture Search (BNAS) proposes a broad scalable architecture, which consists of convolution and enhancement blocks and is dubbed Broad Convolutional Neural Network (BCNN), as the search space for remarkable efficiency improvement. BCNN reuses the topologies of cells in the convolution block so that BNAS can employ few cells for efficient search. Moreover, multi-scale feature fusion and knowledge embedding are proposed to improve the performance of BCNN with shallow topology. However, BNAS suffers from two drawbacks: 1) insufficient representation diversity for feature fusion and enhancement, and 2) the time consumed by human experts to design knowledge embeddings. In this paper, we propose Stacked BNAS, whose search space is a developed broad scalable architecture named Stacked BCNN, with better performance than BNAS. On the one hand, Stacked BCNN treats a mini BCNN as a basic block to preserve comprehensive representations and deliver powerful feature extraction ability. For multi-scale feature enhancement, each mini BCNN feeds the outputs of the deep and broad cells to the enhancement cell. For multi-scale feature fusion, each mini BCNN feeds the outputs of the deep, broad and enhancement cells to the output node. On the other hand, we propose Knowledge Embedding Search (KES) to learn appropriate knowledge embeddings in a differentiable way. Moreover, the basic unit of KES is an over-parameterized knowledge embedding module that consists of all possible candidate knowledge embeddings. Experimental results show that 1) Stacked BNAS obtains better performance than BNAS-v2 on both CIFAR-10 and ImageNet, 2) the proposed KES algorithm helps reduce the parameters of the learned architecture while keeping satisfactory performance, and 3) Stacked BNAS delivers a state-of-the-art search efficiency of 0.02 GPU days.
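The data flow stated in the abstract can be sketched as toy code. This is only an illustrative skeleton of the described wiring, not the paper's method: all cell functions below are hypothetical placeholders (simple list transforms standing in for learned convolution, broad, and enhancement cells), and the fusion is plain concatenation.

```python
# Toy sketch of the Stacked BCNN data flow described in the abstract.
# Every cell here is a hypothetical stand-in, not a searched operation.

def deep_cell(x):
    # placeholder for a searched deep convolution cell
    return [v * 2 for v in x]

def broad_cell(x):
    # placeholder for a broad cell
    return [v + 1 for v in x]

def enhancement_cell(deep_out, broad_out):
    # multi-scale feature enhancement: the enhancement cell
    # receives the outputs of both the deep and broad cells
    return [d + b for d, b in zip(deep_out, broad_out)]

def mini_bcnn(x):
    d = deep_cell(x)
    b = broad_cell(d)
    e = enhancement_cell(d, b)
    # multi-scale feature fusion: deep, broad and enhancement
    # outputs all reach the output node (here, concatenation)
    return d + b + e

def stacked_bcnn(x, num_blocks=2):
    # Stacked BCNN treats the mini BCNN as its basic block
    out = x
    for _ in range(num_blocks):
        out = mini_bcnn(out)
    return out

print(stacked_bcnn([1.0], num_blocks=1))  # → [2.0, 3.0, 5.0]
```

With one block and input [1.0], the deep cell yields [2.0], the broad cell [3.0], the enhancement cell [5.0], and the output node concatenates all three, matching the fusion pattern the abstract describes.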

Language: English
Source URL: http://ir.ia.ac.cn/handle/173211/46596
Collection: State Key Laboratory of Management and Control for Complex Systems / Deep Reinforcement Learning
Corresponding Author: Dongbin Zhao
Author Affiliations:
1. State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences
2. College of Navigation, Dalian Maritime University
3. School of Computer Science & Engineering, South China University of Technology
4. School of Artificial Intelligence, University of Chinese Academy of Sciences
Recommended Citation
GB/T 7714: Ding Z, Chen Y, Li N, et al. Stacked BNAS: Rethinking Broad Convolutional Neural Network for Neural Architecture Search[J]. IEEE Transactions on Neural Networks and Learning Systems, 2022, 0(0): 0.
APA: Ding, Z., Chen, Y., Li, N., Zhao, D., & Chen, C. L. P. (2022). Stacked BNAS: Rethinking Broad Convolutional Neural Network for Neural Architecture Search. IEEE Transactions on Neural Networks and Learning Systems, 0(0), 0.
MLA: Ding, Zixiang, et al. "Stacked BNAS: Rethinking Broad Convolutional Neural Network for Neural Architecture Search." IEEE Transactions on Neural Networks and Learning Systems 0.0 (2022): 0.

Deposit Method: OAI Harvesting

Source: Institute of Automation


Unless otherwise noted, all content in this system is protected by copyright, with all rights reserved.