Uncertainty-Aware Dual-Evidential Learning for Weakly-Supervised Temporal Action Localization
Document Type | Journal Article
Authors | Chen, Mengyuan (2,3); Gao, Junyu (2,3); Xu, Changsheng
Journal | IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE
Publication Date | 2023-12-01
Volume | 45
Issue | 12
Pages | 15896-15911
Keywords | Uncertainty; Background noise; Task analysis; Location awareness; Measurement uncertainty; Interference; Predictive models; Weakly-supervised temporal action localization; evidential deep learning; uncertainty estimation
ISSN | 0162-8828
DOI | 10.1109/TPAMI.2023.3308571
Corresponding Author | Xu, Changsheng (csxu@nlpr.ia.ac.cn)
Abstract | Weakly-supervised temporal action localization (WTAL) aims to localize action instances and recognize their categories with only video-level labels. Despite great progress, existing methods suffer from severe action-background ambiguity, which mainly arises from background noise and the neglect of non-salient action snippets. To address this issue, we propose a generalized evidential deep learning (EDL) framework for WTAL, called Uncertainty-aware Dual-Evidential Learning (UDEL), which extends the traditional EDL paradigm to the weakly-supervised multi-label classification goal under the guidance of epistemic and aleatoric uncertainties: the former arises from the model's lack of knowledge, while the latter stems from the inherent properties of the samples themselves. Specifically, to exclude undesirable background snippets, we fuse the video-level epistemic and aleatoric uncertainties to measure the interference of background noise with the video-level prediction. The snippet-level aleatoric uncertainty is then derived for progressive mutual learning, which gradually attends to entire action instances in an "easy-to-hard" manner and encourages the snippet-level epistemic uncertainty to be complementary to the foreground attention scores. Extensive experiments show that UDEL achieves state-of-the-art performance on four public benchmarks. Our code is available at github.com/mengyuanchen2021/UDEL.
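For context, the sketch below shows how the epistemic and aleatoric uncertainties named in the abstract are commonly derived in the subjective-logic formulation of EDL, where a network outputs Dirichlet evidence per class. This is an illustrative sketch under standard EDL assumptions, not the authors' released implementation; the function name `edl_uncertainties` and the tensor shapes are hypothetical.

```python
# Minimal sketch of standard EDL uncertainty quantities (subjective-logic form).
# Assumption: a classification head outputs one non-negative "evidence" value
# per class. This is NOT code from the UDEL repository.
import torch
import torch.nn.functional as F

def edl_uncertainties(logits: torch.Tensor):
    """logits: (batch, num_classes) raw outputs of an evidence head (hypothetical)."""
    evidence = F.softplus(logits)                 # evidence e_k >= 0
    alpha = evidence + 1.0                        # Dirichlet parameters alpha_k = e_k + 1
    strength = alpha.sum(dim=-1, keepdim=True)    # total strength S = sum_k alpha_k
    prob = alpha / strength                       # expected class probabilities E[p_k]
    num_classes = logits.size(-1)

    # Epistemic uncertainty ("vacuity"): u = K / S, near 1 when total evidence
    # is low, i.e., when the model lacks knowledge about the input.
    epistemic = num_classes / strength.squeeze(-1)

    # Aleatoric uncertainty: expected entropy of the categorical distribution
    # under the Dirichlet, sum_k E[p_k] * (digamma(S + 1) - digamma(alpha_k + 1));
    # it is high when evidence is plentiful but spread across conflicting classes.
    aleatoric = (prob * (torch.digamma(strength + 1.0)
                         - torch.digamma(alpha + 1.0))).sum(dim=-1)
    return prob, epistemic, aleatoric
```

Read this way, an input with little total evidence yields high vacuity (the model simply does not know), while an input with abundant but conflicting evidence yields high expected entropy; this is the epistemic/aleatoric distinction the abstract leans on when separating background noise from non-salient action snippets.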
WOS Keywords | ATTENTION
Funding Projects | National Key Research and Development Plan of China [2020AAA0106200]; National Natural Science Foundation of China [62036012, U21B2044, 62236008, 61721004, 62102415, 62072286, 62106262, 62002355]; Beijing Natural Science Foundation [L201001]; Open Research Projects of Zhejiang Lab [2022RC0AB02]
WOS Research Areas | Computer Science; Engineering
Language | English
WOS Record No. | WOS:001130146400114
Publisher | IEEE COMPUTER SOC
Funding Agencies | National Key Research and Development Plan of China; National Natural Science Foundation of China; Beijing Natural Science Foundation; Open Research Projects of Zhejiang Lab
Source URL | http://ir.ia.ac.cn/handle/173211/55493
Collection | State Key Laboratory of Multimodal Artificial Intelligence Systems
Affiliations |
1. Peng Cheng Lab, Shenzhen 518055, Peoples R China
2. Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing 101408, Peoples R China
3. Chinese Acad Sci, Inst Automat, State Key Lab Multimodal Artificial Intelligence, Beijing 100190, Peoples R China
Recommended Citation (GB/T 7714) | Chen, Mengyuan, Gao, Junyu, Xu, Changsheng. Uncertainty-Aware Dual-Evidential Learning for Weakly-Supervised Temporal Action Localization[J]. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45(12): 15896-15911.
APA | Chen, Mengyuan, Gao, Junyu, & Xu, Changsheng. (2023). Uncertainty-Aware Dual-Evidential Learning for Weakly-Supervised Temporal Action Localization. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 45(12), 15896-15911.
MLA | Chen, Mengyuan, et al. "Uncertainty-Aware Dual-Evidential Learning for Weakly-Supervised Temporal Action Localization." IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 45.12 (2023): 15896-15911.
Ingest Method: OAI harvesting
Source: Institute of Automation