Chinese Academy of Sciences Institutional Repositories Grid
Deep Multi-Modality Adversarial Networks for Unsupervised Domain Adaptation

Document Type: Journal Article

Authors: Ma, Xinhong (1,2,3); Zhang, Tianzhu (1,2,3); Xu, Changsheng (1,2,3)
Journal: IEEE TRANSACTIONS ON MULTIMEDIA
Publication Date: 2019-09-01
Volume: 21; Issue: 9; Pages: 2419-2431
Keywords: Unsupervised domain adaptation; triplet loss; stacked attention; multi-modality; social event recognition
ISSN: 1520-9210
DOI: 10.1109/TMM.2019.2902100
Corresponding Author: Xu, Changsheng (csxu@nlpr.ia.ac.cn)
Abstract: Unsupervised domain adaptation aims to transfer domain knowledge from existing well-defined tasks to new ones where labels are unavailable. In real-world applications, domain discrepancy is usually uncontrollable, especially for multi-modality data, which strongly motivates addressing the multi-modality domain adaptation task. Because labels are unavailable in the target domain, how to learn semantic multi-modality representations and successfully adapt the classifier from the source to the target domain remain open challenges. To address these issues, we propose a multi-modality adversarial network (MMAN), which applies stacked attention to learn semantic multi-modality representations and reduces domain discrepancy via adversarial training. Unlike previous domain adaptation methods, which cannot make full use of source-domain category information, a multi-channel constraint is employed to capture fine-grained category knowledge, which enhances the discrimination of target samples and boosts target performance on single-modality and multi-modality domain adaptation problems. We apply the proposed MMAN to two applications: cross-domain object recognition and cross-domain social event recognition. Extensive experimental evaluations demonstrate the effectiveness of the proposed model for unsupervised domain adaptation.
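The record contains no code, but the abstract outlines a recognizable recipe: a stacked-attention module that fuses modalities into a joint representation, a label classifier trained on source data, and a domain discriminator trained adversarially to reduce source/target discrepancy. The PyTorch sketch below illustrates that general recipe only; all module names, layer sizes, the two-hop attention design, and the gradient-reversal-based adversarial training are assumptions of mine, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips (and scales) gradients in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)


class StackedAttentionFusion(nn.Module):
    """Two attention hops over image regions guided by a text query (illustrative sizes)."""
    def __init__(self, img_dim, txt_dim, hid_dim=256):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, hid_dim)
        self.txt_proj = nn.Linear(txt_dim, hid_dim)
        self.att1 = nn.Linear(hid_dim, 1)
        self.att2 = nn.Linear(hid_dim, 1)

    def _attend(self, regions, query, att_layer):
        # regions: (B, R, H); query: (B, H) -> attended summary: (B, H)
        scores = att_layer(torch.tanh(regions + query.unsqueeze(1)))  # (B, R, 1)
        weights = F.softmax(scores, dim=1)
        return (weights * regions).sum(dim=1)

    def forward(self, img_regions, txt_feat):
        v = self.img_proj(img_regions)             # (B, R, H) projected region features
        q = self.txt_proj(txt_feat)                # (B, H) projected text feature
        u1 = self._attend(v, q, self.att1) + q     # first attention hop
        u2 = self._attend(v, u1, self.att2) + u1   # second attention hop
        return u2                                  # fused multi-modality representation


class MMANSketch(nn.Module):
    """Fused feature -> label classifier, plus a domain discriminator behind a gradient-reversal layer."""
    def __init__(self, img_dim, txt_dim, num_classes, hid_dim=256):
        super().__init__()
        self.fusion = StackedAttentionFusion(img_dim, txt_dim, hid_dim)
        self.classifier = nn.Linear(hid_dim, num_classes)
        self.discriminator = nn.Sequential(
            nn.Linear(hid_dim, hid_dim), nn.ReLU(), nn.Linear(hid_dim, 2))

    def forward(self, img_regions, txt_feat, lambd=1.0):
        feat = self.fusion(img_regions, txt_feat)
        class_logits = self.classifier(feat)                            # source-supervised head
        domain_logits = self.discriminator(grad_reverse(feat, lambd))   # adversarial head
        return feat, class_logits, domain_logits


if __name__ == "__main__":
    model = MMANSketch(img_dim=2048, txt_dim=300, num_classes=10)
    img = torch.randn(4, 36, 2048)   # e.g. 36 region features per image (assumed)
    txt = torch.randn(4, 300)        # e.g. averaged word embeddings (assumed)
    feat, cls_logits, dom_logits = model(img, txt, lambd=0.5)
    print(feat.shape, cls_logits.shape, dom_logits.shape)
```

A plausible training objective would combine cross-entropy on labelled source samples, a binary domain loss on both domains (reversed through the GRL), and a triplet loss (e.g. torch.nn.TripletMarginLoss) over source features to reflect the fine-grained, category-aware constraint the abstract mentions; the paper's actual multi-channel constraint is not reproduced here.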
WOS Keywords: KERNEL; SPARSE
Funding Projects: National Natural Science Foundation of China [61432019, 61572498, 61532009, 61728210, 61721004, 61751211, 61772244, 61472379, 61720106006, U1705262]; Key Research Program of Frontier Sciences, Chinese Academy of Sciences [QYZDJ-SSW-JSC039]; Beijing Natural Science Foundation [4172062]; Youth Innovation Promotion Association, Chinese Academy of Sciences [2018166]
WOS Research Areas: Computer Science; Telecommunications
Language: English
WOS Record Number: WOS:000483015200021
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Funding Agencies: National Natural Science Foundation of China; Key Research Program of Frontier Sciences, Chinese Academy of Sciences; Beijing Natural Science Foundation; Youth Innovation Promotion Association, Chinese Academy of Sciences
Source URL: http://ir.ia.ac.cn/handle/173211/27232
Collection: Institute of Automation, National Laboratory of Pattern Recognition, Multimedia Computing and Graphics Team
Author Affiliations:
1. Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
2. Univ Chinese Acad Sci, Beijing 100049, Peoples R China
3. Peng Cheng Lab, Shenzhen, Peoples R China
Recommended Citation
GB/T 7714
Ma, Xinhong, Zhang, Tianzhu, Xu, Changsheng. Deep Multi-Modality Adversarial Networks for Unsupervised Domain Adaptation[J]. IEEE TRANSACTIONS ON MULTIMEDIA, 2019, 21(9): 2419-2431.
APA: Ma, Xinhong, Zhang, Tianzhu, & Xu, Changsheng. (2019). Deep Multi-Modality Adversarial Networks for Unsupervised Domain Adaptation. IEEE TRANSACTIONS ON MULTIMEDIA, 21(9), 2419-2431.
MLA: Ma, Xinhong, et al. "Deep Multi-Modality Adversarial Networks for Unsupervised Domain Adaptation". IEEE TRANSACTIONS ON MULTIMEDIA 21.9 (2019): 2419-2431.

Ingestion Method: OAI Harvesting

Source: Institute of Automation
