Chinese Academy of Sciences Institutional Repositories Grid
Source-Guided Target Feature Reconstruction for Cross-Domain Classification and Detection

Document Type: Journal Article

Authors: Jiao, Yifan (1); Yao, Hantao (2); Bao, Bing-Kun (3); Xu, Changsheng (2,4)
Journal: IEEE TRANSACTIONS ON IMAGE PROCESSING
Publication Date: 2024
Volume: 33, Pages: 2808-2822
Keywords: Source-guided target feature reconstruction; cross-domain image classification; cross-domain object detection
ISSN: 1057-7149
DOI: 10.1109/TIP.2024.3384766
Corresponding Author: Xu, Changsheng (csxu@nlpr.ia.ac.cn)
English Abstract: Existing cross-domain classification and detection methods usually apply a consistency constraint between the target sample and its self-augmentation for unsupervised learning without considering the essential source knowledge. In this paper, we propose a Source-guided Target Feature Reconstruction (STFR) module for cross-domain visual tasks, which applies source visual words to reconstruct the target features. Since the reconstructed target features contain the source knowledge, they can be treated as a bridge to connect the source and target domains. Therefore, using them for consistency learning can enhance the target representation and reduce the domain bias. Technically, source visual words are selected and updated according to the source feature distribution, and applied to reconstruct the given target feature via a weighted combination strategy. After that, consistency constraints are built between the reconstructed and original target features for domain alignment. Furthermore, STFR is connected with the optimal transportation algorithm theoretically, which explains the rationality of the proposed module. Extensive experiments on nine benchmarks and two cross-domain visual tasks prove the effectiveness of the proposed STFR module, e.g., 1) cross-domain image classification: obtaining average accuracy of 91.0%, 73.9%, and 87.4% on Office-31, Office-Home, and VisDA-2017, respectively; 2) cross-domain object detection: obtaining mAP of 44.50% on Cityscapes -> Foggy Cityscapes, AP on car of 78.10% on Cityscapes -> KITTI, and MR^-2 of 8.63%, 12.27%, 22.10%, and 40.58% on COCOPersons -> Caltech, CityPersons -> Caltech, COCOPersons -> CityPersons, and Caltech -> CityPersons, respectively.
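
The reconstruction step described in the abstract (a weighted combination of source visual words, followed by a consistency constraint between the reconstructed and original target features) can be sketched roughly as follows. This is a minimal illustration only, assuming softmax weights over cosine similarities and a cosine consistency loss; the function names, temperature, and loss form are assumptions for illustration, not the authors' released implementation.

# Hypothetical sketch of Source-guided Target Feature Reconstruction (STFR).
# The weighting and loss choices below are assumptions, not the paper's exact method.
import torch
import torch.nn.functional as F


def reconstruct_target_features(target_feats, source_visual_words, temperature=0.1):
    """Reconstruct each target feature as a weighted combination of source visual
    words; weights are assumed to be a softmax over cosine similarities."""
    t = F.normalize(target_feats, dim=-1)          # (B, D) target features
    w = F.normalize(source_visual_words, dim=-1)   # (K, D) source visual words
    weights = torch.softmax(t @ w.t() / temperature, dim=-1)  # (B, K) combination weights
    return weights @ source_visual_words           # (B, D) reconstructed target features


def consistency_loss(target_feats, reconstructed_feats):
    """Consistency constraint between original and reconstructed target features
    (cosine distance is an assumption; the paper may use another form)."""
    return (1.0 - F.cosine_similarity(target_feats, reconstructed_feats, dim=-1)).mean()


if __name__ == "__main__":
    torch.manual_seed(0)
    source_words = torch.randn(64, 256)   # K visual words selected from source features
    target_batch = torch.randn(8, 256)    # a batch of target-domain features
    recon = reconstruct_target_features(target_batch, source_words)
    print("consistency loss:", consistency_loss(target_batch, recon).item())

Under these assumptions, the reconstructed features carry source-domain structure, so minimizing the consistency loss pulls the target representation toward the space spanned by the source visual words, which matches the abstract's description of the reconstructed features acting as a bridge between domains.
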
WOS Keywords: NETWORK
Funding Project: National Science and Technology Major Project
WOS Research Areas: Computer Science; Engineering
Language: English
WOS Record Number: WOS:001201858800002
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Funding Organization: National Science and Technology Major Project
Source URL: http://ir.ia.ac.cn/handle/173211/58287
Collection: Institute of Automation, National Laboratory of Pattern Recognition, Multimedia Computing and Graphics Group
Corresponding Author: Xu, Changsheng
Author Affiliations:
1. Nanjing Univ Posts & Telecommun, Sch Commun & Informat Engn, Nanjing 210003, Peoples R China
2. Chinese Acad Sci, Inst Automat, State Key Lab Multimodal Artificial Intelligence Syst, Beijing 100190, Peoples R China
3. Nanjing Univ Posts & Telecommun, Sch Comp Sci, Nanjing 210023, Peoples R China
4. Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing 100049, Peoples R China
Recommended Citation
GB/T 7714
Jiao, Yifan, Yao, Hantao, Bao, Bing-Kun, et al. Source-Guided Target Feature Reconstruction for Cross-Domain Classification and Detection[J]. IEEE TRANSACTIONS ON IMAGE PROCESSING, 2024, 33: 2808-2822.
APA Jiao, Yifan, Yao, Hantao, Bao, Bing-Kun, & Xu, Changsheng. (2024). Source-Guided Target Feature Reconstruction for Cross-Domain Classification and Detection. IEEE TRANSACTIONS ON IMAGE PROCESSING, 33, 2808-2822.
MLA Jiao, Yifan, et al. "Source-Guided Target Feature Reconstruction for Cross-Domain Classification and Detection". IEEE TRANSACTIONS ON IMAGE PROCESSING 33 (2024): 2808-2822.

Ingest Method: OAI Harvesting

Source: Institute of Automation

