Deep Neural Network Self-Distillation Exploiting Data Representation Invariance
Document Type: Journal Article
Authors | Xu, Ting-Bing (1,4,5); Liu, Cheng-Lin |
Journal | IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS |
Publication Date | 2022 |
Volume | 33 |
Issue | 1 |
Pages | 257-269 |
Keywords | Training; Nonlinear distortion; Data models; Neural networks; Knowledge engineering; Network architecture; Generalization error; network compression; representation invariance; self-distillation (SD) |
ISSN | 2162-237X |
DOI | 10.1109/TNNLS.2020.3027634 |
Corresponding Author | Liu, Cheng-Lin (liucl@nlpr.ia.ac.cn) |
Abstract | To harvest small networks with high accuracies, most existing methods mainly utilize compression techniques such as low-rank decomposition and pruning to compress a trained large model into a small network, or transfer knowledge from a powerful large model (teacher) to a small network (student). Despite their success in generating small models of high performance, the dependence on accompanying assistive models complicates the training process and increases memory and time cost. In this article, we propose an elegant self-distillation (SD) mechanism to obtain high-accuracy models directly, without going through an assistive model. Inspired by invariant recognition in the human visual system, different distorted instances of the same input should possess similar high-level data representations. Thus, we can learn data representation invariance between different distorted versions of the same sample. Specifically, in our SD-based learning algorithm, a single network utilizes the maximum mean discrepancy metric to learn global feature consistency and the Kullback-Leibler divergence to constrain posterior class probability consistency across the different distorted branches. Extensive experiments on the MNIST, CIFAR-10/100, and ImageNet data sets demonstrate that the proposed method can effectively reduce the generalization error for various network architectures, such as AlexNet, VGGNet, ResNet, Wide ResNet, and DenseNet, and outperform existing model distillation methods with little extra training effort. |
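The abstract describes the SD objective only at a high level. Below is a minimal PyTorch sketch of how such an objective could be assembled, assuming a model that returns both global features and class logits for each distorted view; the loss weights, kernel bandwidths, temperature, and the specific KL pairing are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of a self-distillation objective of the kind described in the
# abstract: one network sees two distorted (augmented) views of the same batch,
# an MMD term encourages consistent global features, and a KL term encourages
# consistent class posteriors. Hyperparameters here are placeholders.
import torch
import torch.nn.functional as F

def gaussian_mmd(x, y, bandwidths=(1.0, 2.0, 4.0)):
    """Maximum mean discrepancy between feature batches x and y,
    using a mixture of Gaussian (RBF) kernels."""
    def kernel(a, b):
        d2 = torch.cdist(a, b, p=2.0) ** 2          # pairwise squared distances
        return sum(torch.exp(-d2 / (2.0 * s ** 2)) for s in bandwidths)
    return kernel(x, x).mean() + kernel(y, y).mean() - 2.0 * kernel(x, y).mean()

def self_distillation_loss(model, view1, view2, targets,
                           temperature=3.0, w_kl=1.0, w_mmd=0.1):
    """view1/view2: two distorted versions of the same images.
    Assumes `model` returns (global_features, logits) -- a hypothetical interface."""
    feat1, logits1 = model(view1)
    feat2, logits2 = model(view2)

    # Standard supervised loss on both distorted branches.
    ce = F.cross_entropy(logits1, targets) + F.cross_entropy(logits2, targets)

    # KL divergence between softened posteriors of the two branches
    # (one direction shown; the paper may symmetrize or pair branches differently).
    log_p1 = F.log_softmax(logits1 / temperature, dim=1)
    p2 = F.softmax(logits2 / temperature, dim=1)
    kl = F.kl_div(log_p1, p2, reduction="batchmean") * temperature ** 2

    # MMD between the global feature distributions of the two branches.
    mmd = gaussian_mmd(feat1, feat2)

    return ce + w_kl * kl + w_mmd * mmd
```

In training, `view1` and `view2` would be two random distortions (e.g., crops and flips) of the same image batch, and the combined loss is backpropagated through the single shared network, so no separate teacher model is needed.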
WOS Keywords | CONVOLUTIONAL NETWORKS |
Funding Projects | Major Project for New Generation of Artificial Intelligence (AI) [2018AAA0100400]; National Natural Science Foundation of China (NSFC) [61836014]; National Natural Science Foundation of China (NSFC) [61721004]; Ministry of Science and Technology of China |
WOS Research Areas | Computer Science; Engineering |
Language | English |
WOS Accession Number | WOS:000739635300025 |
Publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC |
Funding Organizations | Major Project for New Generation of Artificial Intelligence (AI); National Natural Science Foundation of China (NSFC); Ministry of Science and Technology of China |
Source URL | http://ir.ia.ac.cn/handle/173211/47157 |
Collection | Institute of Automation_National Laboratory of Pattern Recognition_Pattern Analysis and Learning Group |
Affiliations | 1. Chinese Acad Sci CASIA, Natl Lab Pattern Recognit NLPR, Inst Automat, Beijing 100190, Peoples R China; 2. CAS Ctr Excellence Brain Sci & Intelligence Techn, Shanghai 200031, Peoples R China; 3. Chinese Acad Sci, Natl Lab Pattern Recognit, Inst Automat, Beijing 100190, Peoples R China; 4. Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing 100049, Peoples R China; 5. Beihang Univ, Sch Instrumentat Sci & Optoelect Engn, Beijing 100191, Peoples R China |
Recommended Citation (GB/T 7714) | Xu, Ting-Bing, Liu, Cheng-Lin. Deep Neural Network Self-Distillation Exploiting Data Representation Invariance[J]. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2022, 33(1): 257-269. |
APA | Xu, Ting-Bing, & Liu, Cheng-Lin. (2022). Deep Neural Network Self-Distillation Exploiting Data Representation Invariance. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 33(1), 257-269. |
MLA | Xu, Ting-Bing, et al. "Deep Neural Network Self-Distillation Exploiting Data Representation Invariance." IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 33.1 (2022): 257-269. |
Ingestion Method: OAI Harvesting
Source: Institute of Automation