Siamese Deformable Cross-Correlation Network for Real-Time Visual Tracking
Document Type | Journal Article |
Authors | Zheng, Linyu1,4,5; Chen, Yingying1,2,3,4,5; Tang, Ming1,4,5; Wang, Jinqiao1,4,5; Lu, Hanqing1,4,5 |
Journal | NEUROCOMPUTING |
Publication Date | 2020-08-11 |
Volume | 401, Pages 36-47 |
ISSN | 0925-2312 |
Keywords | Visual Tracking; Convolutional Neural Networks; Siamese Network; Deformable Convolutional Network |
DOI | 10.1016/j.neucom.2020.02.080 |
Corresponding Author | Chen, Yingying (yingying.chen@nlpr.ia.ac.cn) |
Abstract | In recent years, SiamFC-based trackers have received much attention because of their great potential in balancing tracking accuracy and speed. However, the robustness of most such trackers is greatly affected by large deformations of targets. We argue that in the cross-correlation operation widely used by modern SiamFC-based trackers, the static correlation between the template kernel and the feature maps of the test sample is difficult to adapt to large deformations of the target object. In this paper, we propose a Siamese deformable cross-correlation network (SiamDCN), which introduces the deformable cross-correlation operation into SiamFC in an online self-adaptive way, for robust visual tracking. Compared to previous SiamFC-based trackers, our SiamDCN is more robust to large deformations of targets by dynamically and adaptively adjusting the location of the correlation calculation for each element of the template kernel in the cross-correlation operation. Moreover, we build a twofold Siamese network, named SiamDCN+, which consists of a SiamDCN branch and a SiamFC branch, for accurate and real-time visual tracking, after observing that the features learned in SiamFC are static and discriminative, whereas those in SiamDCN are dynamic and robust, and they complement each other. Extensive experiments on three public benchmarks, OTB2015, VOT2016, and VOT2017, show that the proposed SiamDCN achieves higher localization accuracy than its baseline tracker SiamFC, and the proposed SiamDCN+ achieves competitive performance compared to state-of-the-art real-time trackers while running at over 40 FPS. (C) 2020 Elsevier B.V. All rights reserved. |
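The abstract's central idea, replacing SiamFC's static cross-correlation with one in which each template-kernel element samples the test feature map at an adaptively shifted location, can be illustrated with a toy NumPy sketch. This is not the paper's implementation: the function names are made up, the sketch works on a single 2-D channel, and it uses fixed integer offsets instead of the learned real-valued offsets with bilinear sampling that a deformable operation would predict online.

```python
import numpy as np

def cross_correlation(feat, kernel):
    """Static cross-correlation, as in SiamFC: slide `kernel` over `feat`.
    feat: (H, W) search-region features; kernel: (kh, kw) template features.
    Returns a (H-kh+1, W-kw+1) response map."""
    H, W = feat.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(feat[y:y + kh, x:x + kw] * kernel)
    return out

def deformable_cross_correlation(feat, kernel, offsets):
    """Toy deformable variant: kernel element (i, j) samples `feat` at its
    regular grid position plus an offset (dy, dx) = offsets[i, j], so the
    effective sampling grid can deform with the target. Out-of-bounds
    samples contribute zero. (Simplification: the paper predicts
    real-valued offsets online and interpolates bilinearly.)"""
    H, W = feat.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            acc = 0.0
            for i in range(kh):
                for j in range(kw):
                    dy, dx = offsets[i, j]
                    sy, sx = y + i + dy, x + j + dx
                    if 0 <= sy < H and 0 <= sx < W:
                        acc += feat[sy, sx] * kernel[i, j]
            out[y, x] = acc
    return out
```

With all offsets set to zero, the deformable version reduces exactly to the static one, which is why the paper can treat it as a drop-in generalization of SiamFC's correlation layer.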
WOS Keywords | OBJECT TRACKING; ROBUST |
Funding Projects | National Natural Science Foundation of China [61976210]; National Natural Science Foundation of China [61876086]; National Natural Science Foundation of China [61806200]; National Natural Science Foundation of China [61772527]; Fund of the Key Laboratory of Rich-Media Knowledge Organization and Service of Digital Publishing Content [ZD2019-10/01]; Research and Development Projects in the Key Areas of Guangdong Province [2019B010153001] |
WOS Research Area | Computer Science |
Language | English |
Publisher | ELSEVIER |
WOS Accession Number | WOS:000544725700004 |
Funding Organizations | National Natural Science Foundation of China; Fund of the Key Laboratory of Rich-Media Knowledge Organization and Service of Digital Publishing Content; Research and Development Projects in the Key Areas of Guangdong Province |
Source URL | [http://ir.ia.ac.cn/handle/173211/40062] |
Collection | Institute of Automation, National Laboratory of Pattern Recognition, Image and Video Analysis Group |
Author Affiliations | 1. Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing 100049, Peoples R China; 2. Inst Sci & Tech Informat China, Key Lab Rich Media Knowledge Org, Beijing 100038, Peoples R China; 3. Inst Sci & Tech Informat China, Serv Digital Publishing Content, Beijing 100038, Peoples R China; 4. Univ Chinese Acad Sci, Beijing 100049, Peoples R China; 5. Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, 95 Zhongguancun East Rd, Beijing 100190, Peoples R China |
Recommended Citation (GB/T 7714) | Zheng, Linyu, Chen, Yingying, Tang, Ming, et al. Siamese Deformable Cross-Correlation Network for Real-Time Visual Tracking[J]. NEUROCOMPUTING, 2020, 401: 36-47. |
APA | Zheng, Linyu, Chen, Yingying, Tang, Ming, Wang, Jinqiao, & Lu, Hanqing. (2020). Siamese Deformable Cross-Correlation Network for Real-Time Visual Tracking. NEUROCOMPUTING, 401, 36-47. |
MLA | Zheng, Linyu, et al. "Siamese Deformable Cross-Correlation Network for Real-Time Visual Tracking". NEUROCOMPUTING 401 (2020): 36-47. |
Deposit Method: OAI harvesting
Source: Institute of Automation