Dual-Evidential Learning for Weakly-Supervised Temporal Action Localization
Document Type: Conference Paper
Authors | Chen, Mengyuan3,4; Gao, Junyu3,4 |
Publication Date | 2022 |
Conference Date | 2022-10-23 |
Conference Venue | Tel Aviv, Israel |
Abstract | Weakly-supervised temporal action localization (WS-TAL) aims to localize action instances and recognize their categories using only video-level labels. Despite great progress, existing methods suffer from severe action-background ambiguity, which mainly stems from background noise introduced by aggregation operations and from large intra-action variations caused by the task gap between classification and localization. To address this issue, we propose a generalized evidential deep learning (EDL) framework for WS-TAL, called Dual-Evidential Learning for Uncertainty modeling (DELU), which extends the traditional EDL paradigm to the weakly-supervised multi-label classification setting. Specifically, to adaptively exclude undesirable background snippets, we use video-level uncertainty to measure how much background noise interferes with the video-level prediction. Snippet-level uncertainty is then derived for progressive learning, which gradually attends to entire action instances in an "easy-to-hard" manner. Extensive experiments show that DELU achieves state-of-the-art performance on the THUMOS14 and ActivityNet1.2 benchmarks. Our code is available at github.com/MengyuanChen21/ECCV2022-DELU. |
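The abstract's notion of uncertainty comes from evidential deep learning, where a classifier outputs non-negative per-class evidence that parameterizes a Dirichlet distribution. A minimal sketch of the standard EDL uncertainty quantity (this illustrates the general EDL formulation the paper builds on, not DELU's full dual-evidential objective):

```python
import numpy as np

def edl_uncertainty(evidence: np.ndarray) -> float:
    """Vacuity-style uncertainty from standard EDL: u = K / S, where
    alpha = evidence + 1 parameterizes a Dirichlet, S = sum_k alpha_k is
    the Dirichlet strength, and K is the number of classes."""
    alpha = evidence + 1.0           # Dirichlet parameters
    strength = alpha.sum()           # total evidence mass S
    num_classes = evidence.shape[0]  # K
    return num_classes / strength

# Zero evidence gives maximal uncertainty (u = 1); strong evidence for
# any class drives u toward 0.
print(edl_uncertainty(np.zeros(5)))                      # → 1.0
print(edl_uncertainty(np.array([20., 0., 0., 0., 0.])))  # → 0.2
```

In the paper's setting, a high video-level uncertainty flags predictions dominated by background noise, while snippet-level uncertainties order snippets for the "easy-to-hard" progressive learning described above.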
Source URL | [http://ir.ia.ac.cn/handle/173211/51521] |
Collection | State Key Laboratory of Multimodal Artificial Intelligence Systems |
Author Affiliations | 1. Peng Cheng Laboratory 2. Hikvision Research Institute 3. School of Artificial Intelligence, University of Chinese Academy of Sciences 4. National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences |
Recommended Citation (GB/T 7714) | Chen, Mengyuan, Gao, Junyu, Yang, Shicai, et al. Dual-evidential learning for weakly-supervised temporal action localization[C]. In: . Tel Aviv, Israel. 2022-10-23. |
Deposit Method: OAI harvesting
Source: Institute of Automation
Unless otherwise stated, all content in this system is protected by copyright, and all rights are reserved.