Chinese Academy of Sciences Institutional Repositories Grid
Multi-Modality Self-Distillation for Weakly Supervised Temporal Action Localization

Document Type: Journal Article

Authors: Huang, Linjiang [1,3]; Wang, Liang [2]; Li, Hongsheng [1,3]
Journal: IEEE TRANSACTIONS ON IMAGE PROCESSING
Publication Year: 2022
Volume: 31, Pages: 1504-1519
Keywords: Location awareness; Reliability; Noise measurement; Annotations; Training; Head; Task analysis; Weakly supervised temporal action localization; multi-modality; pseudo label; self-distillation
ISSN: 1057-7149
DOI: 10.1109/TIP.2021.3137649
Corresponding Author: Li, Hongsheng (hsli@ee.cuhk.edu.hk)
Abstract: As a challenging task in high-level video understanding, Weakly-supervised Temporal Action Localization (WTAL) has attracted increasing attention in recent years. However, under the weak supervision of video-level classification labels, it is difficult to accurately determine action instance boundaries. To address this issue, pseudo-label-based methods [Alwassel et al. (2019), Luo et al. (2020), and Zhai et al. (2020)] were proposed to generate snippet-level pseudo labels from classification results. Despite their promising performance, these methods hardly take full advantage of the multiple modalities, i.e., RGB and optical flow sequences, to generate high-quality pseudo labels. Most of them also ignore how to mitigate label noise, which hinders the network's ability to learn discriminative feature representations. To address these challenges, we propose a Multi-Modality Self-Distillation (MMSD) framework, which contains two single-modal streams and a fused-modal stream to perform multi-modality knowledge distillation and multi-modality self-voting. On the one hand, multi-modality knowledge distillation improves snippet-level classification performance by transferring knowledge between the single-modal streams and the fused-modal stream. On the other hand, multi-modality self-voting mitigates label noise in a modality-voting manner according to the reliability and complementarity of the streams. Experimental results on the THUMOS14 and ActivityNet1.3 datasets demonstrate the effectiveness of our method and its superior performance over state-of-the-art approaches. Our code is available at https://github.com/LeonHLJ/MMSD.
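
As a rough illustration of the two mechanisms named in the abstract, the Python sketch below shows one plausible form of multi-modality knowledge distillation (a temperature-scaled KL loss pulling each single-modal stream toward the fused-modal stream) and multi-modality self-voting (keeping snippet pseudo labels only where the streams agree). Every function name, tensor shape, and threshold here is an assumption made for illustration, not the authors' implementation; the official code lives at https://github.com/LeonHLJ/MMSD.

# Hypothetical sketch of the two MMSD components described in the abstract.
# All names and thresholds are illustrative assumptions.
import torch
import torch.nn.functional as F

def distillation_loss(single_logits, fused_logits, temperature=2.0):
    """KL divergence pushing a single-modal stream (e.g. RGB or flow)
    toward the fused-modal stream's snippet-level class distribution.
    Shapes: (batch, snippets, classes)."""
    t = temperature
    teacher = F.softmax(fused_logits.detach() / t, dim=-1)
    student = F.log_softmax(single_logits / t, dim=-1)
    # Standard distillation: batchmean KL rescaled by t^2.
    return F.kl_div(student, teacher, reduction="batchmean") * (t * t)

def self_vote_pseudo_labels(rgb_scores, flow_scores, fused_scores, thresh=0.5):
    """Binary snippet-level pseudo labels (action vs. background) kept
    only where all three streams vote the same way, so that snippets the
    modalities disagree on do not inject label noise into training.
    Inputs are per-snippet foreground probabilities in [0, 1]."""
    votes = torch.stack([rgb_scores, flow_scores, fused_scores]) > thresh
    agree = votes.all(dim=0) | (~votes).all(dim=0)  # unanimous vote
    pseudo = votes[2].float()        # fused stream casts the label
    return pseudo, agree.float()     # pseudo label and reliability mask

Detaching the fused-modal logits keeps the teacher fixed during the distillation step, and the unanimity mask discards snippets on which the modalities disagree; this is one simple way to realize the reliability-based noise mitigation the abstract describes.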
Funding Projects: Centre for Perceptual and Interactive Intelligence Ltd.; Research Grants Council of Hong Kong [14204021]; Research Grants Council of Hong Kong [14208417]; Research Grants Council of Hong Kong [14207319]; Chinese University of Hong Kong (CUHK) Strategic Fund
WOS Research Areas: Computer Science; Engineering
Language: English
WOS Accession Number: WOS:000748370500006
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Funding Organizations: Centre for Perceptual and Interactive Intelligence Ltd.; Research Grants Council of Hong Kong; Chinese University of Hong Kong (CUHK) Strategic Fund
Source URL: http://ir.ia.ac.cn/handle/173211/47337
Collection: Institute of Automation, Center for Research on Intelligent Perception and Computing
Author Affiliations:
1. Chinese Univ Hong Kong, Multimedia Lab, Hong Kong, Peoples R China
2. Chinese Acad Sci CASIA, Inst Automat, Beijing 100190, Peoples R China
3. Ctr Perceptual & Interact Intelligence CPII, Hong Kong, Peoples R China
Recommended Citation Formats:
GB/T 7714: Huang, Linjiang, Wang, Liang, Li, Hongsheng. Multi-Modality Self-Distillation for Weakly Supervised Temporal Action Localization[J]. IEEE TRANSACTIONS ON IMAGE PROCESSING, 2022, 31: 1504-1519.
APA: Huang, Linjiang, Wang, Liang, & Li, Hongsheng. (2022). Multi-Modality Self-Distillation for Weakly Supervised Temporal Action Localization. IEEE TRANSACTIONS ON IMAGE PROCESSING, 31, 1504-1519.
MLA: Huang, Linjiang, et al. "Multi-Modality Self-Distillation for Weakly Supervised Temporal Action Localization". IEEE TRANSACTIONS ON IMAGE PROCESSING 31 (2022): 1504-1519.

Deposit Method: OAI Harvesting

Source: Institute of Automation

