Chinese Academy of Sciences Institutional Repositories Grid
Cross-Architecture Knowledge Distillation

Document Type: Conference Paper

Authors: Yufan Liu 3,4; Jiajiong Cao 5; Bing Li 1,3; Weiming Hu 2,3,4; Jingting Ding 5; Liang Li 5
Publication Date: 2022-12
Conference Date: 2022.12.4-2022.12.8
Conference Venue: Macau SAR, China
Abstract

The Transformer attracts much attention because of its ability to learn global relations and its superior performance. To achieve higher performance, it is natural to distill complementary knowledge from a Transformer to a convolutional neural network (CNN). However, most existing knowledge distillation methods consider only homologous-architecture distillation, such as distilling knowledge from one CNN to another, and may not be suitable for cross-architecture scenarios such as Transformer-to-CNN. To deal with this problem, a novel cross-architecture knowledge distillation method is proposed. Specifically, instead of directly mimicking the teacher's output or intermediate features, a partially cross attention projector and a group-wise linear projector are introduced to align the student's features with the teacher's in two projected feature spaces. A multi-view robust training scheme is further presented to improve the robustness and stability of the framework. Extensive experiments show that the proposed method outperforms 14 state-of-the-art methods on both small-scale and large-scale datasets.
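The record does not include the paper's code, so the following is only a minimal PyTorch sketch of the feature-alignment idea the abstract describes: projecting CNN student features into a Transformer teacher's token space and penalizing their distance. All class and function names, shapes, and the MSE objective here are illustrative assumptions, not the authors' implementation; the paper's partially cross attention projector and multi-view robust training scheme are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GroupWiseLinearProjector(nn.Module):
    """Sketch of a group-wise linear projector (name from the abstract):
    student channels are split into groups and each group is mapped
    separately into the teacher's embedding dimension, implemented here
    as a grouped 1x1 convolution over flattened spatial tokens."""
    def __init__(self, student_dim: int, teacher_dim: int, groups: int = 4):
        super().__init__()
        assert student_dim % groups == 0 and teacher_dim % groups == 0
        self.proj = nn.Conv1d(student_dim, teacher_dim, kernel_size=1, groups=groups)

    def forward(self, cnn_feat: torch.Tensor) -> torch.Tensor:
        # cnn_feat: (B, C_s, H, W) CNN feature map -> (B, H*W, C_t) tokens
        tokens = self.proj(cnn_feat.flatten(2))   # (B, C_t, H*W)
        return tokens.transpose(1, 2)

def cross_arch_alignment_loss(cnn_feat: torch.Tensor,
                              vit_tokens: torch.Tensor,
                              projector: GroupWiseLinearProjector) -> torch.Tensor:
    """Align projected student features with (frozen) teacher tokens.
    Assumes vit_tokens are patch tokens with the CLS token removed and
    that the student's H*W grid matches the teacher's patch grid."""
    projected = projector(cnn_feat)               # (B, H*W, C_t)
    return F.mse_loss(projected, vit_tokens.detach())

# Usage with dummy shapes: a 16x16 patch grid from a ViT-style teacher
# (embed dim 384) and a matching CNN feature map (256 channels).
student_feat = torch.randn(2, 256, 16, 16)
teacher_tokens = torch.randn(2, 256, 384)         # 16*16 = 256 tokens
proj = GroupWiseLinearProjector(student_dim=256, teacher_dim=384, groups=4)
loss = cross_arch_alignment_loss(student_feat, teacher_tokens, proj)
```

The grouped projection keeps the mapping cheap while still giving each channel group its own linear map into the teacher space; in the paper this alignment is done in two projected spaces rather than the single one sketched here.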

Source URL: http://ir.ia.ac.cn/handle/173211/51486
Collection: Institute of Automation, National Laboratory of Pattern Recognition, Video Content Security Team
Corresponding Author: Bing Li
Author Affiliations:
1. PeopleAI, Inc.
2. CAS Center for Excellence in Brain Science and Intelligence Technology
3. Institute of Automation, Chinese Academy of Sciences
4. School of Artificial Intelligence, University of Chinese Academy of Sciences
5. Ant Financial Service Group
Recommended Citation Format (GB/T 7714):
Yufan Liu, Jiajiong Cao, Bing Li, et al. Cross-Architecture Knowledge Distillation[C]. Macau SAR, China, 2022.12.4-2022.12.8.

Deposit Method: OAI Harvesting

Source: Institute of Automation


Unless otherwise stated, all content in this system is protected by copyright, and all rights are reserved.