Learning Attentions: Residual Attentional Siamese Network for High Performance Online Visual Tracking
Document Type: Conference Paper
Authors | Qiang Wang 2,4; Zhu Teng 3; Junliang Xing 2; Jin Gao 2; Weiming Hu 2; Stephen Maybank 1 |
Publication Date | 2018-06 |
Conference Date | 2018-07 |
Conference Venue | Salt Lake City, Utah, USA |
English Abstract | Offline training for object tracking has recently shown great potential in balancing tracking accuracy and speed. However, it is still difficult to adapt an offline-trained model to a target tracked online. This work presents a Residual Attentional Siamese Network (RASNet) for high-performance object tracking. The RASNet model reformulates the correlation filter within a Siamese tracking framework and introduces several kinds of attention mechanisms that adapt the model without updating it online. In particular, by exploiting an offline-trained general attention, a target-adapted residual attention, and a channel-favored feature attention, RASNet not only mitigates the over-fitting problem in deep network training, but also enhances its discriminative capacity and adaptability owing to the separation of representation learning and discriminator learning. The proposed deep architecture is trained end to end and takes full advantage of rich spatio-temporal information to achieve robust visual tracking. Experimental results on two recent benchmarks, OTB-2015 and VOT2017, show that the RASNet tracker achieves state-of-the-art tracking accuracy while running at more than 80 frames per second. |
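The abstract describes exemplar features being modulated by learned spatial and channel attentions before cross-correlation with the search region. The following is a minimal sketch of that general idea, not the authors' implementation: the module name, layer choices, and feature sizes below are illustrative assumptions rather than details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionalSiameseHead(nn.Module):
    """Hypothetical sketch of an attention-modulated Siamese correlation head."""

    def __init__(self, channels=256):
        super().__init__()
        # Spatial attention over the exemplar feature map (a stand-in for the
        # general + residual attentions mentioned in the abstract).
        self.spatial_attn = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )
        # Per-channel attention (a stand-in for the channel-favored attention).
        self.channel_attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, z_feat, x_feat):
        # z_feat: exemplar (template) features, shape (1, C, Hz, Wz)
        # x_feat: search-region features, shape (1, C, Hx, Wx)
        z = z_feat * self.spatial_attn(z_feat) * self.channel_attn(z_feat)
        # Cross-correlation: use the attended exemplar as a convolution kernel.
        return F.conv2d(x_feat, z)  # response map, (1, 1, Hx-Hz+1, Wx-Wz+1)

# Example usage with assumed feature-map sizes:
head = AttentionalSiameseHead(channels=256)
z = torch.randn(1, 256, 6, 6)     # exemplar features
x = torch.randn(1, 256, 22, 22)   # search-region features
score_map = head(z, x)            # (1, 1, 17, 17) response map
```

In this sketch the attentions depend only on the exemplar branch, so they can be computed once per target and reused at every frame without online model updates, which is the adaptation-without-updating behaviour the abstract attributes to the attention mechanisms.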
Source URL | http://ir.ia.ac.cn/handle/173211/39070 |
Collection | Institute of Automation / National Laboratory of Pattern Recognition / Video Content Security Team; Institute of Automation, Chinese Academy of Sciences |
Author Affiliations | 1. Department of Computer Science and Information Systems, Birkbeck College, University of London, UK; 2. National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, China; 3. School of Computer and Information Technology, Beijing Jiaotong University, Beijing, China; 4. University of Chinese Academy of Sciences, Beijing, China |
Recommended Citation (GB/T 7714) | Qiang Wang, Zhu Teng, Junliang Xing, et al. Learning Attentions: Residual Attentional Siamese Network for High Performance Online Visual Tracking[C]. Salt Lake City, Utah, USA, 2018-07. |
Deposit Method: OAI Harvesting
Source: Institute of Automation