Authors | Wang, Tong (2,3); Zhu, Yousong (1,2); Zhao, Chaoyang (2); Zhao, Xu (2); Wang, Jinqiao (2,3,4); Tang, Ming (2)
Publication Date | 2021-07
Conference Date | 2021-7-5
Conference Venue | Online
English Abstract | Knowledge distillation has been successfully applied in image classification for model acceleration. There are also some works applying this technique to object detection, but they all treat different feature regions equally when performing feature mimicking. In this paper, we propose an end-to-end attention-guided knowledge distillation method to train efficient single-stage detectors with much smaller backbones. More specifically, we introduce an attention mechanism that prioritizes the transfer of important knowledge by focusing on a sparse set of hard samples, leading to a more thorough distillation process. In addition, the proposed distillation method also provides an easy way to train efficient detectors without the tedious ImageNet pre-training procedure. Extensive experiments on the PASCAL VOC and CityPersons datasets demonstrate the effectiveness of the proposed approach. We achieve 57.96% and 69.48% mAP on VOC07 with 1/8 VGG16 and 1/4 VGG16 backbones, greatly outperforming their ImageNet pre-trained counterparts by 11.7% and 7.1%, respectively.
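The attention-weighted feature-mimicking idea described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's exact formulation: it assumes a PyTorch setting, derives a spatial attention map from the teacher (channel-wise mean of absolute activations, softmax-normalized over locations), and uses a hypothetical 1x1-conv `adapter` to match student channels to the teacher's. The abstract's hard-sample selection is not reproduced here.

```python
import torch
import torch.nn.functional as F

def spatial_attention(feat):
    # Spatial attention map: channel-wise mean of absolute activations,
    # normalized with a softmax over all spatial locations.
    att = feat.abs().mean(dim=1, keepdim=True)            # (N, 1, H, W)
    n, _, h, w = att.shape
    return F.softmax(att.view(n, -1), dim=1).view(n, 1, h, w)

def attention_guided_mimic_loss(student_feat, teacher_feat, adapter):
    # Project student features to the teacher's channel dimension
    # (adapter is a hypothetical 1x1 conv; not specified in the abstract).
    s = adapter(student_feat)
    # Weight the per-location L2 mimic error by the teacher's attention,
    # so high-response (informative) regions dominate the distillation loss.
    att = spatial_attention(teacher_feat).detach()        # (N, 1, H, W)
    sq_err = (s - teacher_feat.detach()).pow(2).mean(dim=1, keepdim=True)
    return (att * sq_err).sum(dim=(1, 2, 3)).mean()
```

A hypothetical adapter for one mimicked feature level could be `torch.nn.Conv2d(student_channels, teacher_channels, kernel_size=1)`; the resulting loss would typically be added to the detector's standard detection losses during training.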
Source URL | http://ir.ia.ac.cn/handle/173211/47417
Collection | Institute of Automation_National Laboratory of Pattern Recognition_Image and Video Analysis Team
Author Affiliations |
1. ObjectEye Inc., Beijing, China
2. National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, China
3. School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
4. NEXWISE Co., Ltd, Guangzhou, China
Recommended Citation (GB/T 7714) |
Wang, Tong, Zhu, Yousong, Zhao, Chaoyang, et al. ATTENTION-GUIDED KNOWLEDGE DISTILLATION FOR EFFICIENT SINGLE-STAGE DETECTOR[C]. In: . Online. 2021-7-5.