Chinese Academy of Sciences Institutional Repositories Grid
Dual-evidential learning for weakly-supervised temporal action localization

Document type: Conference paper

Authors: Chen, Mengyuan (3,4); Gao, Junyu (3,4); Yang, Shicai (2); Xu, Changsheng (1,3,4)
Publication date: 2022
Conference date: 2022-10-23
Conference location: Tel Aviv, Israel
Abstract

Weakly-supervised temporal action localization (WS-TAL) aims to localize action instances and recognize their categories using only video-level labels. Despite great progress, existing methods suffer from severe action-background ambiguity, which mainly stems from background noise introduced by aggregation operations and from large intra-action variations caused by the task gap between classification and localization. To address this issue, we propose a generalized evidential deep learning (EDL) framework for WS-TAL, called Dual-Evidential Learning for Uncertainty modeling (DELU), which extends the traditional EDL paradigm to the weakly-supervised multi-label classification setting. Specifically, to adaptively exclude undesirable background snippets, we use the video-level uncertainty to measure how much background noise interferes with the video-level prediction. The snippet-level uncertainty is then derived for progressive learning, which gradually focuses on entire action instances in an “easy-to-hard” manner. Extensive experiments show that DELU achieves state-of-the-art performance on the THUMOS14 and ActivityNet1.2 benchmarks. Our code is available at github.com/MengyuanChen21/ECCV2022-DELU.
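
To make the uncertainty modeling concrete, below is a minimal PyTorch sketch of the standard EDL recipe (Sensoy et al., 2018) that DELU generalizes: non-negative evidence parameterizes a Dirichlet distribution, and uncertainty is inversely proportional to the total evidence. The softplus activation, the top-k snippet aggregation, and all shapes are illustrative assumptions, not the paper's exact formulation; see the linked repository for the authors' implementation.

```python
import torch
import torch.nn.functional as F

def edl_uncertainty(logits):
    """Standard EDL: map class logits to Dirichlet evidence and uncertainty.

    logits: (..., K) raw scores over K action categories.
    Returns (belief, uncertainty), where uncertainty = K / sum(alpha)
    lies in (0, 1] and shrinks as total evidence grows.
    """
    evidence = F.softplus(logits)                 # non-negative evidence per class
    alpha = evidence + 1.0                        # Dirichlet concentration parameters
    strength = alpha.sum(dim=-1, keepdim=True)    # total Dirichlet strength S
    belief = evidence / strength                  # subjective-logic belief masses
    uncertainty = logits.shape[-1] / strength.squeeze(-1)
    return belief, uncertainty

# Toy example: one untrimmed video with T snippets and K classes.
T, K = 100, 20
snippet_logits = torch.randn(T, K)

# Snippet-level uncertainty: one value per snippet, shape (T,).
_, u_snippet = edl_uncertainty(snippet_logits)

# Video-level logits via top-k mean pooling over snippets (a common WS-TAL
# aggregation choice, assumed here for illustration), then video-level
# uncertainty from the aggregated prediction.
k = max(1, T // 8)
video_logits = snippet_logits.topk(k, dim=0).values.mean(dim=0)
_, u_video = edl_uncertainty(video_logits.unsqueeze(0))
print(u_snippet.shape, float(u_video))            # torch.Size([100]) and a scalar
```

In a DELU-style pipeline, the video-level value would measure how much background noise corrupts the aggregated prediction, while the snippet-level values would rank snippets from easy (low uncertainty) to hard for progressive learning, as described in the abstract.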

Source URL: http://ir.ia.ac.cn/handle/173211/51521
Collection: State Key Laboratory of Multimodal Artificial Intelligence Systems
Author affiliations:
1. Peng Cheng Laboratory
2. Hikvision Research Institute
3. School of Artificial Intelligence, University of Chinese Academy of Sciences
4. National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences
Recommended citation (GB/T 7714):
Chen, Mengyuan, Gao, Junyu, Yang, Shicai, et al. Dual-evidential learning for weakly-supervised temporal action localization[C]. In: European Conference on Computer Vision (ECCV). Tel Aviv, Israel, 2022-10-23.

Deposit method: OAI harvesting

Source: Institute of Automation

