Chinese Academy of Sciences Institutional Repositories Grid
ATTENTION-GUIDED KNOWLEDGE DISTILLATION FOR EFFICIENT SINGLE-STAGE DETECTOR

Document Type: Conference Paper

Authors: Wang, Tong (2,3); Zhu, Yousong (1,2); Zhao, Chaoyang (2); Zhao, Xu (2); Wang, Jinqiao (2,3,4); Tang, Ming (2)
Publication Date: 2021-07
Conference Date: 2021-07-05
Conference Location: Online
Abstract

Knowledge distillation has been successfully applied to image classification for model acceleration. Some works have also applied the technique to object detection, but they all treat different feature regions equally when performing feature mimicking. In this paper, we propose an end-to-end attention-guided knowledge distillation method to train efficient single-stage detectors with much smaller backbones. More specifically, we introduce an attention mechanism that prioritizes the transfer of important knowledge by focusing on a sparse set of hard samples, leading to a more thorough distillation process. In addition, the proposed distillation method provides an easy way to train efficient detectors without the tedious ImageNet pre-training procedure. Extensive experiments on the PASCAL VOC and CityPersons datasets demonstrate the effectiveness of the proposed approach. We achieve 57.96% and 69.48% mAP on VOC07 with 1/8 VGG16 and 1/4 VGG16 backbones, greatly outperforming their ImageNet pre-trained counterparts by 11.7% and 7.1%, respectively.
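The abstract only sketches the method, but its core idea, weighting a feature-mimic loss by an attention mask so that distillation concentrates on a sparse set of hard locations, can be illustrated compactly. The PyTorch sketch below is a hypothetical rendering under assumed names (`student_feat`, `teacher_feat`, the top-k ratio `topk_ratio`) and an assumed top-k selection rule; it is not the authors' released code or exact formulation.

```python
import torch

def attention_guided_mimic_loss(student_feat, teacher_feat, topk_ratio=0.1):
    """Feature-mimic loss concentrated on 'hard' spatial locations.

    Minimal sketch of attention-guided distillation: the teacher's
    per-location activation magnitude serves as an attention map, the
    top-k fraction of locations is kept as a sparse mask, and the
    student is trained to mimic teacher features only there. All names
    and the top-k rule are illustrative assumptions.
    """
    # Channel dims must already match; a thin student backbone
    # (e.g. 1/8 VGG16) would need a 1x1 adapter conv beforehand.
    assert student_feat.shape == teacher_feat.shape, "use a 1x1 adapter conv first"

    n, c, h, w = teacher_feat.shape
    # Attention map: squared L2 norm of teacher activations per location.
    attn = teacher_feat.pow(2).sum(dim=1).view(n, -1)    # (N, H*W)
    k = max(1, int(topk_ratio * h * w))
    # Sparse mask over the k highest-attention ("hard") locations.
    thresh = attn.topk(k, dim=1).values[:, -1:]          # k-th largest, (N, 1)
    mask = (attn >= thresh).float().view(n, 1, h, w)     # (N, 1, H, W)

    # Mimic loss computed only where the mask is on.
    diff = (student_feat - teacher_feat).pow(2) * mask
    return diff.sum() / (mask.sum() * c + 1e-6)
```

In training, a term like this would be added to the detector's standard classification and regression losses, which is how a small student backbone could train from scratch, without ImageNet pre-training, under the teacher's guidance.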
 

Source URL: http://ir.ia.ac.cn/handle/173211/47417
Collection: Institute of Automation, National Laboratory of Pattern Recognition, Image and Video Analysis Team
Author Affiliations:
1. ObjectEye Inc., Beijing, China
2. National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, China
3. School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
4. NEXWISE Co., Ltd, Guangzhou, China
Recommended Citation (GB/T 7714):
Wang, Tong, Zhu, Yousong, Zhao, Chaoyang, et al. ATTENTION-GUIDED KNOWLEDGE DISTILLATION FOR EFFICIENT SINGLE-STAGE DETECTOR[C]. In: . Online. 2021-7-5.

Ingest Method: OAI harvesting

Source: Institute of Automation

