Chinese Academy of Sciences Institutional Repositories Grid
Deep Neural Network Self-Distillation Exploiting Data Representation Invariance

Document Type: Journal Article

Authors: Xu, Ting-Bing 1,4,5; Liu, Cheng-Lin 2,3,4
Journal: IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS
Publication Date: 2022
Volume: 33; Issue: 1; Pages: 257-269
Keywords: Training; Nonlinear distortion; Data models; Neural networks; Knowledge engineering; Network architecture; Generalization error; network compression; representation invariance; self-distillation (SD)
ISSN: 2162-237X
DOI: 10.1109/TNNLS.2020.3027634
Corresponding Author: Liu, Cheng-Lin (liucl@nlpr.ia.ac.cn)
Abstract: To harvest small networks with high accuracies, most existing methods mainly utilize compression techniques such as low-rank decomposition and pruning to compress a trained large model into a small network, or transfer knowledge from a powerful large model (teacher) to a small network (student). Despite their success in generating small models of high performance, the dependence on accompanying assistive models complicates the training process and increases memory and time cost. In this article, we propose an elegant self-distillation (SD) mechanism to obtain high-accuracy models directly without going through an assistive model. Inspired by invariant recognition in the human visual system, we posit that different distorted instances of the same input should possess similar high-level data representations. Thus, we can learn data representation invariance between different distorted versions of the same sample. Specifically, in our learning algorithm based on SD, the single network utilizes the maximum mean discrepancy metric to learn global feature consistency and the Kullback-Leibler divergence to constrain posterior class probability consistency across the different distorted branches. Extensive experiments on MNIST, CIFAR-10/100, and ImageNet data sets demonstrate that the proposed method can effectively reduce the generalization error for various network architectures, such as AlexNet, VGGNet, ResNet, Wide ResNet, and DenseNet, and outperform existing model distillation methods with little extra training effort.
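A minimal sketch of the training objective described in the abstract follows (Python/PyTorch): one network, two distorted views of each sample, an MMD term aligning high-level features, and a KL term aligning class posteriors. It assumes a network `model` that returns a (features, logits) pair for each input; the RBF-kernel MMD estimator, the softmax temperature `tau`, the symmetric form of the KL term, and the loss weights `lambda_mmd` and `lambda_kl` are illustrative assumptions rather than the authors' reported settings.

import torch
import torch.nn.functional as F

def rbf_mmd(x, y, sigma=1.0):
    # Biased estimate of squared maximum mean discrepancy with a Gaussian (RBF) kernel.
    def k(a, b):
        return torch.exp(-torch.cdist(a, b).pow(2) / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

def sd_loss(model, view1, view2, labels, lambda_mmd=0.1, lambda_kl=1.0, tau=3.0):
    # Self-distillation loss over two distorted views of the same batch.
    f1, z1 = model(view1)   # assumed interface: (global features, class logits)
    f2, z2 = model(view2)

    # Supervised cross-entropy on both distorted branches.
    ce = F.cross_entropy(z1, labels) + F.cross_entropy(z2, labels)

    # Global feature consistency between the two branches via MMD.
    mmd = rbf_mmd(f1.flatten(1), f2.flatten(1))

    # Posterior class-probability consistency via a symmetric KL divergence
    # between temperature-softened softmax outputs.
    logp1 = F.log_softmax(z1 / tau, dim=1)
    logp2 = F.log_softmax(z2 / tau, dim=1)
    kl = F.kl_div(logp1, logp2.exp(), reduction="batchmean") \
       + F.kl_div(logp2, logp1.exp(), reduction="batchmean")

    return ce + lambda_mmd * mmd + lambda_kl * kl

Because both branches share one set of weights, training needs no separate teacher network, and inference uses a single ordinary forward pass, which is what lets SD avoid any assistive model.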
WOS Keywords: CONVOLUTIONAL NETWORKS
Funding Projects: Major Project for New Generation of Artificial Intelligence (AI) [2018AAA0100400]; National Natural Science Foundation of China (NSFC) [61836014]; National Natural Science Foundation of China (NSFC) [61721004]; Ministry of Science and Technology of China
WOS Research Areas: Computer Science; Engineering
Language: English
WOS Record Number: WOS:000739635300025
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Funding Agencies: Major Project for New Generation of Artificial Intelligence (AI); National Natural Science Foundation of China (NSFC); Ministry of Science and Technology of China
Source URL: [http://ir.ia.ac.cn/handle/173211/47157]
Collection: Institute of Automation, National Laboratory of Pattern Recognition, Pattern Analysis and Learning Group
Author Affiliations:
1.Chinese Acad Sci CASIA, Natl Lab Pattern Recognit NLPR, Inst Automat, Beijing 100190, Peoples R China
2.CAS Ctr Excellence Brain Sci & Intelligence Techn, Shanghai 200031, Peoples R China
3.Chinese Acad Sci, Natl Lab Pattern Recognit, Inst Automat, Beijing 100190, Peoples R China
4.Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing 100049, Peoples R China
5.Beihang Univ, Sch Instrumentat Sci & Optoelect Engn, Beijing 100191, Peoples R China
Recommended Citation
GB/T 7714
Xu, Ting-Bing, Liu, Cheng-Lin. Deep Neural Network Self-Distillation Exploiting Data Representation Invariance[J]. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2022, 33(1): 257-269.
APA: Xu, Ting-Bing, & Liu, Cheng-Lin. (2022). Deep Neural Network Self-Distillation Exploiting Data Representation Invariance. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 33(1), 257-269.
MLA: Xu, Ting-Bing, et al. "Deep Neural Network Self-Distillation Exploiting Data Representation Invariance". IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 33.1 (2022): 257-269.

Deposit Method: OAI Harvesting

Source: Institute of Automation


Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.