Chinese Academy of Sciences Institutional Repositories Grid
An Iterative Co-Training Transductive Framework for Zero Shot Learning

Document Type: Journal Article

Authors: Liu, Bo (1,2); Hu, Lihua (3); Dong, Qiulei (1,2,4); Hu, Zhanyi (1,2)
Journal: IEEE TRANSACTIONS ON IMAGE PROCESSING
Publication Date: 2021
Volume: 30; Pages: 6943-6956
Keywords: Visualization; Semantics; Training; Feature extraction; Testing; Detectors; Predictive models; Zero-shot learning; transductive learning; co-training
ISSN: 1057-7149
DOI: 10.1109/TIP.2021.3100552
Corresponding Author: Dong, Qiulei (qldong@nlpr.ia.ac.cn)
Abstract: In the zero-shot learning (ZSL) community, it is generally recognized that transductive learning performs better than inductive learning, since unseen-class samples are also used in the training stage. How to generate pseudo labels for unseen-class samples and how to use such usually noisy pseudo labels are two critical issues in transductive learning. In this work, we introduce an iterative co-training framework that contains two different base ZSL models and an exchanging module. At each iteration, the two ZSL models are co-trained to separately predict pseudo labels for the unseen-class samples, the exchanging module exchanges the predicted pseudo labels, and the exchanged pseudo-labeled samples are added to the training sets for the next iteration. In this way, our framework gradually boosts ZSL performance by fully exploiting the potential complementarity of the two models' classification capabilities. In addition, our co-training framework is also applied to generalized ZSL (GZSL), in which a semantic-guided out-of-distribution (OOD) detector is proposed to pick out the most likely unseen-class samples before class-level classification, alleviating the bias problem in GZSL. Extensive experiments on three benchmarks show that our proposed methods significantly outperform about 31 state-of-the-art methods.
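The iterative co-training loop described in the abstract can be illustrated with a minimal sketch, shown below. This is not the authors' implementation: the model interface (fit / predict_with_confidence), the confidence-based selection of pseudo-labels, and all parameter names are assumptions made only for illustration.

```python
import numpy as np

def cotrain_transductive(model_a, model_b, seen_X, seen_y, unseen_X,
                         n_iters=5, keep_frac=0.1):
    """Hypothetical sketch: iteratively co-train two base ZSL models,
    exchanging their most confident pseudo-labels each iteration."""
    d = seen_X.shape[1]
    # Each model keeps its own pool of pseudo-labeled unseen-class samples.
    pool_a_X, pool_a_y = np.empty((0, d)), np.empty((0,), dtype=seen_y.dtype)
    pool_b_X, pool_b_y = np.empty((0, d)), np.empty((0,), dtype=seen_y.dtype)

    for _ in range(n_iters):
        # 1. Train each base ZSL model on seen-class data plus its current pseudo-labeled pool.
        model_a.fit(np.vstack([seen_X, pool_a_X]), np.concatenate([seen_y, pool_a_y]))
        model_b.fit(np.vstack([seen_X, pool_b_X]), np.concatenate([seen_y, pool_b_y]))

        # 2. Each model separately predicts pseudo-labels (with confidences) for the unseen-class samples.
        y_a, conf_a = model_a.predict_with_confidence(unseen_X)
        y_b, conf_b = model_b.predict_with_confidence(unseen_X)

        # 3. Exchanging module: the most confident pseudo-labels produced by one model
        #    are handed to the OTHER model's training pool for the next iteration.
        k = max(1, int(keep_frac * len(unseen_X)))
        top_a = np.argsort(-conf_a)[:k]   # samples model A is most confident about
        top_b = np.argsort(-conf_b)[:k]   # samples model B is most confident about
        pool_b_X, pool_b_y = unseen_X[top_a], y_a[top_a]   # B trains on A's pseudo-labels
        pool_a_X, pool_a_y = unseen_X[top_b], y_b[top_b]   # A trains on B's pseudo-labels

    return model_a, model_b
```

The intent of the loop is that confident pseudo-labels produced by one model augment the other model's training set, which is how the two models' complementary classification capabilities can reinforce each other across iterations.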

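For the GZSL setting, the abstract describes routing test samples through a semantic-guided OOD detector before class-level classification. The following is a hedged sketch of that two-stage inference, assuming a detector whose score is higher for likely unseen-class samples and two separate class-level classifiers; the interfaces and the threshold are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

def gzsl_predict(ood_detector, seen_classifier, unseen_classifier, test_X, threshold=0.5):
    """Hypothetical two-stage GZSL inference: OOD detection first, then class-level classification."""
    # Assumption: a higher score means the sample more likely comes from an unseen class.
    scores = ood_detector.score(test_X)
    is_unseen = scores > threshold

    preds = np.empty(len(test_X), dtype=object)
    if is_unseen.any():
        # Likely unseen-class samples are classified among unseen classes only.
        preds[is_unseen] = unseen_classifier.predict(test_X[is_unseen])
    if (~is_unseen).any():
        # Remaining samples are classified among seen classes only.
        preds[~is_unseen] = seen_classifier.predict(test_X[~is_unseen])
    return preds
```

Restricting each classifier to its own label space is what alleviates the GZSL bias toward seen classes: unseen-class samples never compete directly against seen-class labels at classification time.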

Funding Projects: National Natural Science Foundation of China (NSFC) [61991423]; National Natural Science Foundation of China (NSFC) [U1805264]; Strategic Priority Research Program of the Chinese Academy of Sciences [XDB32050100]
WOS Research Areas: Computer Science; Engineering
Language: English
WOS Accession Number: WOS:000682121800005
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Funding Organizations: National Natural Science Foundation of China (NSFC); Strategic Priority Research Program of the Chinese Academy of Sciences
Source URL: http://ir.ia.ac.cn/handle/173211/45622
Collection: Institute of Automation, National Laboratory of Pattern Recognition, Robot Vision Team
Corresponding Author: Dong, Qiulei
Author Affiliations:
1. Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
2. Univ Chinese Acad Sci, Sch Future Technol, Beijing 100049, Peoples R China
3. Taiyuan Univ Sci & Technol, Sch Comp Sci & Technol, Taiyuan 030024, Peoples R China
4. Chinese Acad Sci, Ctr Excellence Brain Sci & Intelligence Technol, Beijing 100190, Peoples R China
Recommended Citation
GB/T 7714
Liu, Bo, Hu, Lihua, Dong, Qiulei, et al. An Iterative Co-Training Transductive Framework for Zero Shot Learning[J]. IEEE TRANSACTIONS ON IMAGE PROCESSING, 2021, 30: 6943-6956.
APA: Liu, Bo, Hu, Lihua, Dong, Qiulei, & Hu, Zhanyi. (2021). An Iterative Co-Training Transductive Framework for Zero Shot Learning. IEEE TRANSACTIONS ON IMAGE PROCESSING, 30, 6943-6956.
MLA: Liu, Bo, et al. "An Iterative Co-Training Transductive Framework for Zero Shot Learning". IEEE TRANSACTIONS ON IMAGE PROCESSING 30 (2021): 6943-6956.

Ingestion Method: OAI Harvesting

Source: Institute of Automation


Unless otherwise stated, all content in this system is protected by copyright, and all rights are reserved.