|
Authors | Chen, Zhiyang1,3; Zhu, Yousong1; Zhao, Chaoyang1; Hu, Guosheng2; Zeng, Wei4; Wang, Jinqiao1,3; Tang, Ming1
|
Publication Date | 2021-10
|
Conference Date | 2021-10-20
|
Conference Venue | Chengdu, China
|
Abstract | The Transformer has achieved great success in computer vision, but how to split an image into patches remains a problem. Existing methods usually use a fixed-size patch embedding, which might destroy the semantics of objects. To address this problem, we propose a new Deformable Patch (DePatch) module which learns to adaptively split images into patches with different positions and scales in a data-driven way rather than using predefined fixed patches. In this way, our method can well preserve the semantics in patches. The DePatch module works as a plug-and-play module and can easily be incorporated into different transformers to achieve end-to-end training. We term this DePatch-embedded transformer the Deformable Patch-based Transformer (DPT) and conduct extensive evaluations of DPT on image classification and object detection. Results show that DPT achieves 81.9% top-1 accuracy on ImageNet classification, and 43.7% box mAP with RetinaNet and 44.3% with Mask R-CNN on MSCOCO object detection. Code has been made available at: https://github.com/CASIA-IVA-Lab/DPT.
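The abstract describes the DePatch module only at a high level. As a rough illustration of the general idea (predicting a per-patch offset and scale, then building each patch embedding from features bilinearly sampled inside the predicted box), a minimal PyTorch sketch follows. All class and parameter names (DeformablePatchEmbed, patch_size, k, embed_dim), the exact offset/scale parameterization, and the initialization are assumptions for illustration only and are not taken from the paper or the released code linked above.

```python
# Hypothetical sketch of a deformable patch embedding; NOT the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeformablePatchEmbed(nn.Module):
    """Each default patch predicts an offset and a scale; its embedding is built
    from k x k points bilinearly sampled inside the predicted box."""

    def __init__(self, in_chans=3, embed_dim=96, patch_size=4, k=3):
        super().__init__()
        self.k = k
        # Fixed-grid baseline features used to predict per-patch offsets/scales.
        self.base_proj = nn.Conv2d(in_chans, embed_dim, patch_size, stride=patch_size)
        # Predict (dx, dy, log-scale_w, log-scale_h) per patch; zero init keeps the
        # module close to the fixed-grid baseline at the start of training.
        self.offset_scale = nn.Conv2d(embed_dim, 4, 1)
        nn.init.zeros_(self.offset_scale.weight)
        nn.init.zeros_(self.offset_scale.bias)
        # Project the k*k sampled pixel vectors to the final patch embedding.
        self.proj = nn.Linear(in_chans * k * k, embed_dim)

    def forward(self, x):
        B, C, H, W = x.shape
        feat = self.base_proj(x)                       # (B, D, Hp, Wp)
        Hp, Wp = feat.shape[-2:]
        pred = self.offset_scale(feat)                 # (B, 4, Hp, Wp)
        dxy, dsc = pred[:, :2], pred[:, 2:]

        # Default patch centers in normalized [-1, 1] coordinates, (x, y) order.
        ys = torch.linspace(-1 + 1 / Hp, 1 - 1 / Hp, Hp, device=x.device)
        xs = torch.linspace(-1 + 1 / Wp, 1 - 1 / Wp, Wp, device=x.device)
        cy, cx = torch.meshgrid(ys, xs, indexing="ij")
        centers = torch.stack([cx, cy], dim=0).unsqueeze(0) + dxy   # shifted centers

        # Per-patch half extent: default patch size scaled by the predicted factor.
        base_half = torch.tensor([1.0 / Wp, 1.0 / Hp], device=x.device).view(1, 2, 1, 1)
        half = base_half * torch.exp(dsc)

        # k x k sampling grid inside each predicted box.
        lin = torch.linspace(-1, 1, self.k, device=x.device)
        gy, gx = torch.meshgrid(lin, lin, indexing="ij")
        grid = torch.stack([gx, gy], dim=0).view(1, 2, 1, 1, self.k * self.k)
        pts = centers.unsqueeze(-1) + half.unsqueeze(-1) * grid     # (B, 2, Hp, Wp, k*k)
        pts = pts.permute(0, 2, 3, 4, 1).reshape(B, Hp, Wp * self.k * self.k, 2)

        # Bilinear sampling of the input at the predicted locations.
        sampled = F.grid_sample(x, pts, align_corners=False)        # (B, C, Hp, Wp*k*k)
        sampled = sampled.view(B, C, Hp, Wp, self.k * self.k)
        sampled = sampled.permute(0, 2, 3, 1, 4).reshape(B, Hp * Wp, C * self.k * self.k)
        return self.proj(sampled)                                   # (B, Hp*Wp, embed_dim)
```

With a 224x224 RGB input and the assumed defaults (patch_size=4, embed_dim=96), the module produces a (B, 3136, 96) token sequence that could feed a standard transformer encoder, which is what makes such a patch embedding plug-and-play.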
|
Source URL | [http://ir.ia.ac.cn/handle/173211/47414]
Collection | Institute of Automation_National Laboratory of Pattern Recognition_Image and Video Analysis Group
|
Corresponding Author | Zhao, Chaoyang
Author Affiliations | 1. National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, China; 2. AnyVision, Belfast, UK; 3. School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China; 4. Peking University, Beijing, China
|
Recommended Citation (GB/T 7714) |
Chen, Zhiyang, Zhu, Yousong, Zhao, Chaoyang, et al. DPT: Deformable Patch-based Transformer for Visual Recognition[C]. In: . Chengdu, China, 2021-10-20.
|