Chinese Academy of Sciences Institutional Repositories Grid
SgVA-CLIP: Semantic-Guided Visual Adapting of Vision-Language Models for Few-Shot Image Classification

Document Type: Journal Article

Authors: Peng, Fang (1,2,4); Yang, Xiaoshan (2,3,4); Xiao, Linhui (1,2,4); Wang, Yaowei (4); Xu, Changsheng (2,3,4)
Journal: IEEE TRANSACTIONS ON MULTIMEDIA
Publication Date: 2024
Volume: 26, Pages: 3469-3480
Keywords: Few-shot image classification; vision-language models
ISSN: 1520-9210
DOI: 10.1109/TMM.2023.3311646
Corresponding Author: Xu, Changsheng (csxu@nlpr.ia.ac.cn)
Abstract: Although significant progress has been made in few-shot learning, most existing few-shot image classification methods require supervised pre-training on a large number of samples from base classes, which limits their generalization ability in real-world applications. Recently, large-scale Vision-Language Pre-trained models (VLPs) have been gaining increasing attention in few-shot learning because they provide a new paradigm for transferable visual representation learning using easily available text on the Web. However, VLPs may neglect detailed visual information that is difficult to describe in language sentences yet important for learning an effective classifier that distinguishes different images. To address this problem, we propose a new framework, named Semantic-guided Visual Adapting (SgVA), which effectively extends vision-language pre-trained models to produce discriminative adapted visual features by comprehensively using an implicit knowledge distillation, a vision-specific contrastive loss, and a cross-modal contrastive loss. The implicit knowledge distillation is designed to transfer fine-grained cross-modal knowledge to guide the updating of the vision adapter. State-of-the-art results on 13 datasets demonstrate that the adapted visual features complement the cross-modal features well, improving few-shot image classification.
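The abstract describes three training signals (an implicit knowledge distillation, a vision-specific contrastive loss, and a cross-modal contrastive loss) applied to a vision adapter on top of frozen CLIP features. The following is a minimal PyTorch sketch of how such a combination could look. It is an illustration only, not the authors' implementation: the residual-MLP adapter, the names VisualAdapter and sgva_losses, the temperature tau, the blend ratio alpha, the supervised-contrastive form of the vision-specific loss, and the KL formulation of the "implicit" distillation term are all assumptions made for this sketch.

```python
# Minimal sketch of SgVA-style training losses over frozen CLIP features.
# NOT the authors' code: adapter shape, temperatures, and the exact form
# of the implicit distillation term are assumptions for illustration.
import torch
import torch.nn.functional as F
from torch import nn

class VisualAdapter(nn.Module):
    """Small residual MLP over frozen CLIP image features (assumed design)."""
    def __init__(self, dim: int = 512, hidden: int = 256, alpha: float = 0.5):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, dim))
        self.alpha = alpha  # blend between adapted and original features

    def forward(self, x):
        return F.normalize(self.alpha * self.mlp(x) + (1 - self.alpha) * x, dim=-1)

def sgva_losses(img_feats, txt_feats, labels, adapter, tau: float = 0.07):
    """img_feats: (B, D) frozen, L2-normalized CLIP image features
    txt_feats: (C, D) frozen, L2-normalized CLIP text features, one per class
    labels:    (B,)  ground-truth class indices for the support images
    """
    v = adapter(img_feats)  # adapted visual features

    # 1) Cross-modal contrastive loss: adapted image features should match
    #    the text embedding of their own class.
    logits_vt = v @ txt_feats.t() / tau
    loss_xm = F.cross_entropy(logits_vt, labels)

    # 2) Vision-specific contrastive loss: pull adapted features of the same
    #    class together, push other classes apart (supervised contrastive
    #    form, assumed here).
    sim = v @ v.t() / tau
    sim.fill_diagonal_(-1e9)  # exclude self-pairs (finite value avoids NaNs)
    pos = (labels[:, None] == labels[None, :]).float().fill_diagonal_(0)
    log_prob = sim - sim.logsumexp(dim=1, keepdim=True)
    loss_vis = (-(pos * log_prob).sum(1) / pos.sum(1).clamp(min=1)).mean()

    # 3) Implicit knowledge distillation: align the adapted features' class
    #    distribution with the frozen cross-modal (CLIP) prediction, sketched
    #    here as a KL term between the two softmax outputs.
    with torch.no_grad():
        teacher = F.softmax(img_feats @ txt_feats.t() / tau, dim=-1)
    loss_kd = F.kl_div(F.log_softmax(logits_vt, dim=-1), teacher,
                       reduction='batchmean')

    return loss_xm + loss_vis + loss_kd

# Example usage with random stand-in features:
# adapter = VisualAdapter(dim=512)
# img = F.normalize(torch.randn(16, 512), dim=-1)
# txt = F.normalize(torch.randn(5, 512), dim=-1)
# sgva_losses(img, txt, torch.randint(0, 5, (16,)), adapter).backward()
```

In this reading, the frozen CLIP prediction acts as the teacher for the distillation term while the two contrastive losses shape the adapted feature space; at inference, the adapted visual features would then complement the cross-modal image-text scores, as the abstract suggests.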
Funding Project: National Natural Science Foundation of China
WOS Research Areas: Computer Science; Telecommunications
Language: English
WOS Accession Number: WOS:001165348200021
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Funding Organization: National Natural Science Foundation of China
Source URL: http://ir.ia.ac.cn/handle/173211/58025
Collection: Institute of Automation / State Key Laboratory of Pattern Recognition / Multimedia Computing and Graphics Team
Author Affiliations:
1. Chinese Acad Sci, Inst Automat, State Key Lab Multimodal Artificial Intelligence S, Beijing 100190, Peoples R China
2. Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing 100049, Peoples R China
3. Chinese Acad Sci, Inst Automat, State Key Lab Multimodal Artificial Intelligence S, Beijing 100190, Peoples R China
4. Peng Cheng Lab, Shenzhen 518066, Peoples R China
Recommended Citation:
GB/T 7714: Peng, Fang, Yang, Xiaoshan, Xiao, Linhui, et al. SgVA-CLIP: Semantic-Guided Visual Adapting of Vision-Language Models for Few-Shot Image Classification[J]. IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26: 3469-3480.
APA: Peng, Fang, Yang, Xiaoshan, Xiao, Linhui, Wang, Yaowei, & Xu, Changsheng. (2024). SgVA-CLIP: Semantic-Guided Visual Adapting of Vision-Language Models for Few-Shot Image Classification. IEEE TRANSACTIONS ON MULTIMEDIA, 26, 3469-3480.
MLA: Peng, Fang, et al. "SgVA-CLIP: Semantic-Guided Visual Adapting of Vision-Language Models for Few-Shot Image Classification". IEEE TRANSACTIONS ON MULTIMEDIA 26 (2024): 3469-3480.

Deposit Method: OAI Harvesting

Source: Institute of Automation

Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.