Temporal Context Enhanced Feature Aggregation for Video Object Detection
Document type: Conference paper
Authors | He, Fei 3,4 |
Publication date | 2020-02 |
Conference date | 2020-02 |
Conference venue | New York |
Country | US |
Abstract | Video object detection is a challenging task because of the presence of appearance deterioration in certain video frames. One typical solution is to aggregate neighboring features to enhance per-frame appearance features. However, such a method ignores the temporal relations between the aggregated frames, which is critical for improving video recognition accuracy. To handle the appearance deterioration problem, this paper proposes a temporal context enhanced network (TCENet) to exploit temporal context information by temporal aggregation for video object detection. To handle the displacement of objects in videos, a novel DeformAlign module is proposed to align spatial features from frame to frame. Instead of adopting a fixed-length window fusion strategy, a temporal stride predictor is proposed to adaptively select video frames for aggregation, which facilitates exploiting variable temporal information and requires fewer video frames for aggregation to achieve better results. Our TCENet achieves state-of-the-art performance on the ImageNet VID dataset and has a faster runtime. Without bells and whistles, our TCENet achieves 80.3% mAP by aggregating only 3 frames. |
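The aggregation step the abstract describes — fusing aligned neighboring-frame features into the reference frame's features — is commonly implemented with similarity-based weights. Below is a minimal, hedged sketch of that general idea in plain Python; the function names and the softmax-over-cosine-similarity weighting are illustrative assumptions, not the paper's actual TCENet implementation (which additionally uses the DeformAlign module and a temporal stride predictor described above).

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def aggregate_features(reference, neighbors):
    """Fuse (already aligned) neighbor features into the reference frame.

    Weights are softmax-normalized cosine similarities to the reference
    feature, so frames resembling the reference contribute more — a
    generic stand-in for learned aggregation weights.
    """
    frames = [reference] + list(neighbors)
    sims = [cosine_similarity(reference, f) for f in frames]
    exps = [math.exp(s) for s in sims]
    total = sum(exps)
    weights = [e / total for e in exps]  # weights sum to 1
    dim = len(reference)
    return [sum(w * f[i] for w, f in zip(weights, frames))
            for i in range(dim)]

# A neighbor identical to the reference gets the same weight as the
# reference itself; a dissimilar one is down-weighted.
fused = aggregate_features([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]])
```

Because the weights are normalized, aggregating a frame with identical neighbors returns the reference feature unchanged; dissimilar (e.g. deteriorated) frames are suppressed rather than averaged in at full strength.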
Proceedings | The Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20) |
Language | English |
Source URL | [http://ir.ia.ac.cn/handle/173211/48736] |
Collection | Intelligent Systems and Engineering |
Affiliations | 1. Horizon Robotics, Inc. 2. CAS Center for Excellence in Brain Science and Intelligence Technology 3. University of Chinese Academy of Sciences 4. CRISE, Institute of Automation, Chinese Academy of Sciences |
Recommended citation (GB/T 7714) | He, Fei, Gao, Naiyu, Li, Qiaozhe, et al. Temporal Context Enhanced Feature Aggregation for Video Object Detection[C]. In: The Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20). New York, 2020-02. |
Deposit method: OAI harvesting
Source: Institute of Automation
Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.