Chinese Academy of Sciences Institutional Repositories Grid
Understanding and Mitigating Overfitting in Prompt Tuning for Vision-Language Models

Document Type: Journal Article

Authors: Ma, Chengcheng3,4; Liu, Yang2; Deng, Jiankang1; Xie, Lingxi1; Dong, Weiming4; Xu, Changsheng4
Journal: IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY
Publication Date: 2023-09-01
Volume: 33, Issue: 9, Pages: 4616-4629
Keywords: Vision-language model; prompt tuning; over-fitting; subspace learning; gradient projection
ISSN: 1051-8215
DOI: 10.1109/TCSVT.2023.3245584
Corresponding Author: Dong, Weiming (weiming.dong@ia.ac.cn)
Abstract: Pretrained vision-language models (VLMs) such as CLIP have shown impressive generalization capability on downstream vision tasks given appropriate text prompts. Instead of designing prompts manually, Context Optimization (CoOp) has recently been proposed to learn continuous prompts from task-specific training data. Despite the performance improvements on downstream tasks, several studies have reported that CoOp suffers from overfitting in two respects: (i) the test accuracy on base classes first improves and then worsens during training; (ii) the test accuracy on novel classes keeps decreasing. However, no existing study has explained or mitigated these overfitting problems. In this study, we first explore the cause of overfitting by analyzing the gradient flow. Comparative experiments reveal that CoOp favors generalizable features in the early training stage and spurious features in the later stage, leading to the non-overfitting and overfitting phenomena, respectively. Given these observations, we propose Subspace Prompt Tuning (SubPT), which projects the gradients in back-propagation onto the low-rank subspace spanned by the eigenvectors of the early-stage gradient flow throughout the entire training process, successfully eliminating the overfitting problem. In addition, we equip CoOp with a Novel Feature Learner (NFL) to enhance the generalization of the learned prompts to novel categories beyond the training set, without requiring any image training data. Extensive experiments on 11 classification datasets demonstrate that SubPT+NFL consistently boosts the performance of CoOp and outperforms the state-of-the-art CoCoOp approach. Experiments on more challenging downstream vision tasks, including open-vocabulary object detection and zero-shot semantic segmentation, further verify the effectiveness of the proposed method. Code can be found at https://tinyurl.com/mpe64f89.
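The core mechanism in the abstract, projecting back-propagated gradients onto a low-rank subspace spanned by dominant early-stage gradient directions, can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation; the function names (`build_subspace`, `project_gradient`) and the use of a plain SVD over stacked early-stage gradients are assumptions for demonstration.

```python
import numpy as np

def build_subspace(early_grads, rank):
    """Build an orthonormal basis from early-stage gradient vectors.

    Stacks the recorded gradients into a matrix and keeps the top
    `rank` right singular vectors as the low-rank subspace basis.
    """
    G = np.stack(early_grads)                    # (num_steps, dim)
    _, _, Vt = np.linalg.svd(G, full_matrices=False)
    return Vt[:rank]                             # (rank, dim), orthonormal rows

def project_gradient(grad, basis):
    """Project a later-stage gradient onto span(basis).

    With B holding orthonormal rows, the projection is B^T (B g),
    which discards any gradient component outside the subspace.
    """
    return basis.T @ (basis @ grad)
```

In a training loop, the subspace would be estimated once from gradients recorded during the early epochs, and every subsequent gradient would be replaced by its projection before the optimizer step, so that updates stay within the directions identified as generalizable.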
Funding Projects: National Science Foundation of China [U20B2070]; National Science Foundation of China [61832016]; Beijing Natural Science Foundation [L221013]
WOS Research Area: Engineering
Language: English
WOS Record Number: WOS:001063316800016
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Funding Organizations: National Science Foundation of China; Beijing Natural Science Foundation
Source URL: http://ir.ia.ac.cn/handle/173211/53116
Collection: State Key Laboratory of Multimodal Artificial Intelligence Systems
Author Affiliations:
1. Huawei Inc, Shenzhen 518129, Peoples R China
2. Alibaba DAMO Acad, Hangzhou 310024, Peoples R China
3. Univ Chinese Acad Sci UCAS, Sch Artificial Intelligence, Beijing 100049, Peoples R China
4. Chinese Acad Sci CASIA, Inst Automat, Natl Lab Pattern Recognit NLPR, Beijing 100190, Peoples R China
Recommended Citation Formats
GB/T 7714
Ma, Chengcheng, Liu, Yang, Deng, Jiankang, et al. Understanding and Mitigating Overfitting in Prompt Tuning for Vision-Language Models[J]. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2023, 33(9): 4616-4629.
APA: Ma, Chengcheng, Liu, Yang, Deng, Jiankang, Xie, Lingxi, Dong, Weiming, & Xu, Changsheng. (2023). Understanding and Mitigating Overfitting in Prompt Tuning for Vision-Language Models. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 33(9), 4616-4629.
MLA: Ma, Chengcheng, et al. "Understanding and Mitigating Overfitting in Prompt Tuning for Vision-Language Models". IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY 33.9 (2023): 4616-4629.

Deposit Method: OAI harvesting

Source: Institute of Automation


Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.