Chinese Academy of Sciences Institutional Repositories Grid
Exploring Explicitly Disentangled Features for Domain Generalization

Document Type: Journal Article

Authors: Li, Jingwei (1,2); Li, Yuan (1,2); Wang, Huanjie (1,2); Liu, Chengbao (2); Tan, Jie (1,2)
Journal: IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY
Publication Date: 2023-11-01
Volume: 33; Issue: 11; Pages: 6360-6373
ISSN: 1051-8215
Keywords: Domain generalization; feature disentanglement; Fourier transform; data augmentation
DOI: 10.1109/TCSVT.2023.3269534
Corresponding Author: Tan, Jie (tan.jie@tom.com)
Abstract: Domain generalization (DG) is a challenging task that aims to train, using only labeled source data, a robust model that generalizes well to unseen target data. The domain gap between the source and target data may degrade performance. Many methods resort to learning domain-invariant features to overcome this difficulty. However, these methods require sophisticated network designs or training strategies, causing inefficiency and complexity. In this paper, we first analyze and reclassify features into two categories, i.e., implicitly disentangled features and explicitly disentangled features. Since we aim to design a generic DG algorithm that alleviates the problems mentioned above, we focus on the explicitly disentangled features due to their simplicity and interpretability. Based on our analysis, we find that the shape features of images are a simple and elegant choice. We extract the shape features from two aspects. On the network side, we propose Multi-Scale Amplitude Mixing (MSAM), which strengthens shape features at different layers of the network via the Fourier transform. On the input side, we propose a new data augmentation method called Random Shape Warping (RSW), which encourages the model to concentrate more on the global structures of objects. RSW randomly distorts the local parts of images while keeping the global structures unchanged, which further improves the robustness of the model. Our methods are simple yet efficient and can be conveniently used as plug-and-play modules, and they outperform state-of-the-art (SOTA) methods without bells and whistles.
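The following is a minimal, hypothetical sketch (not the authors' released code) of the two mechanisms named in the abstract: Fourier amplitude mixing, which perturbs the amplitude spectrum (style/texture) while keeping the phase (shape), the general idea behind MSAM; and a coarse random local warp that distorts local parts while preserving the global layout, in the spirit of RSW. The function names, grid size, and strength values are illustrative assumptions; it uses NumPy and SciPy on H x W x C float images.

import numpy as np
from scipy.ndimage import map_coordinates, zoom


def amplitude_mix(img_a: np.ndarray, img_b: np.ndarray, lam: float = 0.5) -> np.ndarray:
    """Interpolate the Fourier amplitude of img_a toward that of img_b while
    keeping img_a's phase, so global shape is preserved and style is perturbed."""
    fft_a = np.fft.fft2(img_a, axes=(0, 1))
    fft_b = np.fft.fft2(img_b, axes=(0, 1))
    amp_a, phase_a = np.abs(fft_a), np.angle(fft_a)
    amp_b = np.abs(fft_b)
    amp_mixed = lam * amp_a + (1.0 - lam) * amp_b   # mix amplitude spectra
    mixed = amp_mixed * np.exp(1j * phase_a)        # recombine with original phase
    return np.real(np.fft.ifft2(mixed, axes=(0, 1)))


def random_local_warp(img: np.ndarray, grid: int = 4, strength: float = 0.02,
                      rng: np.random.Generator | None = None) -> np.ndarray:
    """Displace pixels with a smooth, coarse random field: local parts are
    distorted while the overall object structure stays recognizable."""
    if rng is None:
        rng = np.random.default_rng()
    h, w, c = img.shape
    # Coarse random displacement field, bilinearly upsampled to full resolution.
    dy = zoom(rng.uniform(-1, 1, (grid, grid)), (h / grid, w / grid), order=1) * strength * h
    dx = zoom(rng.uniform(-1, 1, (grid, grid)), (h / grid, w / grid), order=1) * strength * w
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.stack([yy + dy, xx + dx])
    warped = np.empty_like(img)
    for ch in range(c):
        warped[..., ch] = map_coordinates(img[..., ch], coords, order=1, mode="reflect")
    return warped


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a, b = rng.random((64, 64, 3)), rng.random((64, 64, 3))
    print(amplitude_mix(a, b).shape, random_local_warp(a, rng=rng).shape)

In practice such augmentations would be applied to training batches of the source domains; the exact multi-scale, per-layer formulation of MSAM and the precise warping scheme of RSW are described in the paper itself.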
Funding Projects: National Key Research and Development Program of China [2022YFB3304602]; National Natural Science Foundation of China [62003344]
WOS Research Area: Engineering
Language: English
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
WOS Record Number: WOS:001093434100012
Funding Organizations: National Key Research and Development Program of China; National Natural Science Foundation of China
Source URL: http://ir.ia.ac.cn/handle/173211/54433
Collection: CAS Engineering Laboratory for Industrial Vision and Intelligent Equipment
Corresponding Author: Tan, Jie
Author Affiliations:
1. Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing 101408, Peoples R China
2. Chinese Acad Sci, Inst Automat, Beijing 100080, Peoples R China
Recommended Citation
GB/T 7714: Li, Jingwei, Li, Yuan, Wang, Huanjie, et al. Exploring Explicitly Disentangled Features for Domain Generalization[J]. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2023, 33(11): 6360-6373.
APA: Li, Jingwei, Li, Yuan, Wang, Huanjie, Liu, Chengbao, & Tan, Jie. (2023). Exploring Explicitly Disentangled Features for Domain Generalization. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 33(11), 6360-6373.
MLA: Li, Jingwei, et al. "Exploring Explicitly Disentangled Features for Domain Generalization". IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY 33.11 (2023): 6360-6373.

Deposit Method: OAI Harvesting

Source: Institute of Automation


Unless otherwise stated, all content in this repository is protected by copyright, with all rights reserved.