Chinese Academy of Sciences Institutional Repositories Grid
Relative Alignment Network for Source-Free Multimodal Video Domain Adaptation

Document Type: Conference Paper

Authors: Huang Yi 1,4; Yang Xiaoshan 1,3,4; Zhang Ji 2; Xu Changsheng 1,3,4
Publication Date: 2022-10
Conference Dates: 2022-10-10 to 2022-10-14
Conference Venue: Lisboa, Portugal
Abstract

Video domain adaptation aims to transfer knowledge from labeled source videos to unlabeled target videos. Existing video domain adaptation methods require full access to the source videos to reduce the domain gap between the source and target videos, which is impractical in real scenarios where the source videos are unavailable due to transmission-efficiency or privacy concerns. To address this problem, we propose a source-free domain adaptation task for videos, in which only a pre-trained source model and unlabeled target videos are available for learning a multimodal video classification model. Existing source-free domain adaptation methods cannot be directly applied to this task, since videos suffer from domain discrepancy along both the multimodal and the temporal dimensions, which makes domain adaptation difficult, especially when the source data are unavailable. In this paper, we propose a Multimodal and Temporal Relative Alignment Network (MTRAN) to deal with these challenges. To explicitly imitate the domain shifts contained in the multimodal information and the temporal dynamics of the source and target videos, we divide the target videos into two splits according to the self-entropy values of their classification results: low-entropy videos are deemed source-like, while high-entropy videos are deemed target-like. We then adopt a self-entropy-guided MixUp strategy to generate synthetic and hypothetical samples at the instance level from the source-like and target-like videos, and push each synthetic sample to be similar to its corresponding hypothetical sample, which is slightly closer to the source-like videos than the synthetic sample, via multimodal and temporal relative alignment schemes. We evaluate the proposed model on four public video datasets. The results show that our model outperforms existing state-of-the-art methods.
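The abstract outlines two concrete mechanisms: a self-entropy split of the target videos into source-like and target-like subsets, and a self-entropy-guided MixUp that builds synthetic/hypothetical sample pairs. The sketch below is a minimal, hypothetical illustration of those two steps, not the authors' released code; the model interface, the 50/50 split ratio, the mixing offset `delta`, and the MSE alignment loss are all illustrative assumptions.

```python
# Minimal sketch of the self-entropy split and self-entropy-guided MixUp
# described in the abstract. All names and hyperparameters are assumptions.
import torch
import torch.nn.functional as F

def self_entropy(logits: torch.Tensor) -> torch.Tensor:
    """Per-sample entropy of the softmax class distribution."""
    probs = F.softmax(logits, dim=-1)
    return -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)

@torch.no_grad()
def split_by_entropy(model, videos: torch.Tensor, ratio: float = 0.5):
    """Rank target videos by self-entropy: low-entropy samples are treated
    as source-like, high-entropy samples as target-like."""
    entropy = self_entropy(model(videos))
    order = torch.argsort(entropy)                 # ascending entropy
    k = int(ratio * len(videos))
    return videos[order[:k]], videos[order[k:]]    # (source-like, target-like)

def mixup_pair(source_like, target_like, lam: float, delta: float = 0.1):
    """Build an instance-level synthetic/hypothetical pair. The hypothetical
    sample uses a slightly larger source-like weight (lam + delta), so it
    sits slightly closer to the source-like video than the synthetic one."""
    synthetic = lam * source_like + (1.0 - lam) * target_like
    lam_h = min(lam + delta, 1.0)
    hypothetical = lam_h * source_like + (1.0 - lam_h) * target_like
    return synthetic, hypothetical

def relative_alignment_loss(feat_syn, feat_hyp):
    """One plausible realization of the relative alignment objective:
    pull the synthetic sample's features toward the hypothetical sample's."""
    return F.mse_loss(feat_syn, feat_hyp.detach())
```

Keeping the hypothetical sample's source-like mixing weight slightly larger than the synthetic sample's realizes the "slightly closer to the source-like videos" relation that the multimodal and temporal relative alignment losses then enforce.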

Conference Proceedings: MM '22: Proceedings of the 30th ACM International Conference on Multimedia
Source URL: http://ir.ia.ac.cn/handle/173211/52094
Collection: State Key Laboratory of Multimodal Artificial Intelligence Systems
Institute of Automation_National Laboratory of Pattern Recognition_Multimedia Computing and Graphics Team
Corresponding Author: Xu Changsheng
Author Affiliations:
1. Institute of Automation, Chinese Academy of Sciences
2. DAMO Academy, Alibaba Group
3. Peng Cheng Laboratory
4. School of Artificial Intelligence, University of Chinese Academy of Sciences
Recommended Citation (GB/T 7714):
Huang Yi, Yang Xiaoshan, Zhang Ji, et al. Relative Alignment Network for Source-Free Multimodal Video Domain Adaptation[C]//MM '22: Proceedings of the 30th ACM International Conference on Multimedia. Lisboa, Portugal, 2022-10-10 to 2022-10-14.

Deposit Method: OAI Harvesting

Source: Institute of Automation


Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.