Chinese Academy of Sciences Institutional Repositories Grid
Cascaded Decoding and Multi-Stage Inference for Spatio-Temporal Video Grounding

Document Type: Conference Paper

Authors: Li Yang1,3; Peixuan Wu1,3; Chunfeng Yuan1; Bing Li1; Weiming Hu1,2,3
Publication Date: 2022-10
Conference Date: 2022-10
Conference Venue: Lisbon, Portugal
Abstract

Human-centric spatio-temporal video grounding (HC-STVG) is a challenging task that aims to localize the spatio-temporal tube of a target person in a video based on a natural language description. In this report, we present our approach to the HC-STVG task. Specifically, building on the TubeDETR framework, we propose two cascaded decoders that decouple spatial and temporal grounding, allowing the model to capture features favorable to each of the two grounding subtasks. We also devise a multi-stage inference strategy that reasons about the target in a coarse-to-fine manner and thereby produces more precise grounding results. To further improve accuracy, we propose a model ensemble strategy that combines the results of models that perform better in either spatial or temporal grounding. We validated the effectiveness of our method on the HC-STVG 2.0 dataset and won second place in the HC-STVG track of the 4th Person in Context (PIC) workshop at ACM MM 2022.
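The abstract does not include code, but the cascaded-decoder idea can be illustrated with a minimal PyTorch sketch: one transformer decoder attends to fused video-text features for temporal grounding (start/end of the target tube), and a second decoder, cascaded on the first one's output, handles spatial grounding (one box per frame). All names, shapes, the ordering of the cascade, and the conditioning mechanism below (CascadedGroundingHead, span_head, box_head) are illustrative assumptions, not the authors' released implementation.

# Minimal sketch, assuming fused video-text features of shape (B, T, d_model);
# not the authors' code.
import torch
import torch.nn as nn


class CascadedGroundingHead(nn.Module):
    def __init__(self, d_model: int = 256, nhead: int = 8, num_layers: int = 2):
        super().__init__()
        layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        # Temporal decoder: per-frame queries attend to the fused
        # video-text memory; a linear head gives start/end logits per frame.
        self.temporal_decoder = nn.TransformerDecoder(layer, num_layers)
        self.span_head = nn.Linear(d_model, 2)
        # Spatial decoder: takes the temporal decoder's output as its
        # queries (the cascade) and predicts one box per frame.
        self.spatial_decoder = nn.TransformerDecoder(layer, num_layers)
        self.box_head = nn.Linear(d_model, 4)

    def forward(self, memory: torch.Tensor):
        """memory: fused video-text features of shape (B, T, d_model)."""
        # Stage 1: temporal grounding over the whole clip.
        temporal_feat = self.temporal_decoder(memory, memory)        # (B, T, d)
        span_logits = self.span_head(temporal_feat)                  # (B, T, 2): start/end logits
        # Stage 2: spatial grounding, cascaded on the temporal features.
        spatial_feat = self.spatial_decoder(temporal_feat, memory)   # (B, T, d)
        boxes = self.box_head(spatial_feat).sigmoid()                # (B, T, 4): normalized (cx, cy, w, h)
        return span_logits, boxes


if __name__ == "__main__":
    head = CascadedGroundingHead()
    fused = torch.randn(2, 32, 256)        # 2 clips, 32 frames, 256-d fused features
    span_logits, boxes = head(fused)
    print(span_logits.shape, boxes.shape)  # (2, 32, 2) and (2, 32, 4)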

Source URL: http://ir.ia.ac.cn/handle/173211/52323
Collection: Institute of Automation_National Laboratory of Pattern Recognition_Video Content Security Team
Corresponding Author: Chunfeng Yuan
Author Affiliations:
1. National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences
2.CAS Center for Excellence in Brain Science and Intelligence Technology
3.School of Artificial Intelligence, University of Chinese Academy of Sciences
Recommended Citation
GB/T 7714
Li Yang, Peixuan Wu, Chunfeng Yuan, et al. Cascaded Decoding and Multi-Stage Inference for Spatio-Temporal Video Grounding[C]. Lisbon, Portugal, 2022-10.

Deposit Method: OAI Harvesting

Source: Institute of Automation


Unless otherwise stated, all content in this system is protected by copyright and all rights are reserved.