Embedded prompt tuning: Towards enhanced calibration of pretrained models for medical images
Document type: Journal article
Authors | Zu, Wenqiang (1); Xie, Shenghao (2,5); Zhao, Qing (3); Li, Guoqi (4); Ma, Lei |
Journal | MEDICAL IMAGE ANALYSIS |
Publication date | 2024-10-01 |
Volume | 97 |
Pages | 13 |
Keywords | Foundation model; Parameter-efficient fine-tuning; Visual prompt tuning; Few-shot medical image analysis |
ISSN | 1361-8415 |
DOI | 10.1016/j.media.2024.103258 |
Corresponding author | Ma, Lei (lei.ma@pku.edu.cn) |
Abstract | Foundation models pre-trained on large-scale data have been widely observed to succeed on a variety of downstream natural-image tasks. Parameter-efficient fine-tuning (PEFT) methods aim to adapt foundation models to new domains by updating only a small portion of their parameters, thereby reducing computational overhead. However, the effectiveness of these PEFT methods, especially in cross-domain few-shot scenarios such as medical image analysis, has not been fully explored. In this work, we study the performance of PEFT when adapting foundation models to medical image classification tasks. Furthermore, to alleviate two limitations of mainstream prompt tuning methods, the way prompts are introduced and their approximation capabilities on Transformer architectures, we propose the Embedded Prompt Tuning (EPT) method, which embeds prompt tokens into the expanded channels. We also find that anomalies arise in the feature-space distribution of foundation models during pre-training, and that prompt tuning can help mitigate this negative impact. To explain this phenomenon, we introduce a novel perspective on prompt tuning: prompt tuning is a distribution calibrator. We support this view by analysing the patch-wise scaling and feature-separation operations contained in EPT. Our experiments show that EPT outperforms several state-of-the-art fine-tuning methods by a significant margin on few-shot medical image classification tasks, and that it completes fine-tuning in highly competitive time, indicating that EPT is an effective PEFT method. The source code is available at github.com/zuwenqiang/EPT. |
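The abstract describes EPT only at a high level: prompt tokens are embedded into expanded channels of the patch tokens, rather than prepended along the sequence dimension as in VPT. The PyTorch sketch below is a minimal, hypothetical illustration of that idea; the class name, the shared-prompt shape, and the projection back to the backbone width are illustrative assumptions, not the authors' implementation (see github.com/zuwenqiang/EPT for the actual code).

```python
import torch
import torch.nn as nn

class EmbeddedPromptSketch(nn.Module):
    """Hypothetical sketch of the channel-embedded prompt idea.

    VPT appends learnable prompt tokens along the sequence
    dimension; the EPT abstract instead describes embedding
    prompts into expanded channels. Here each patch token's
    channel dimension is widened with learnable prompt values,
    then projected back so frozen Transformer blocks can consume
    the tokens unchanged. All names and shapes are assumptions
    for illustration only.
    """

    def __init__(self, embed_dim: int = 768, prompt_dim: int = 16):
        super().__init__()
        # One learnable prompt vector, broadcast to every token.
        self.prompt = nn.Parameter(torch.empty(1, 1, prompt_dim))
        nn.init.uniform_(self.prompt, -0.1, 0.1)
        # Map the widened channels back to the backbone width.
        self.proj = nn.Linear(embed_dim + prompt_dim, embed_dim)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, num_tokens, embed_dim)
        b, n, _ = tokens.shape
        expanded = torch.cat(
            [tokens, self.prompt.expand(b, n, -1)], dim=-1
        )  # channels grow to embed_dim + prompt_dim
        return self.proj(expanded)
```

In a PEFT setting, only the prompt parameters (plus the projection and a task head) would be trained while the pre-trained backbone stays frozen, which is what keeps the number of updated parameters small.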
WOS keywords | FOUNDATION MODEL |
Funding projects | Beijing Natural Science Foundation [JQ24023]; Project of Beijing Science and Technology Committee [Z231100006623010]; National Science and Technology Major Project [2022ZD0116305] |
WOS research areas | Computer Science; Engineering; Radiology, Nuclear Medicine & Medical Imaging |
Language | English |
WOS record number | WOS:001269431800001 |
Publisher | ELSEVIER |
Funding agencies | Beijing Natural Science Foundation; Project of Beijing Science and Technology Committee; National Science and Technology Major Project |
Source URL | http://ir.ia.ac.cn/handle/173211/59230 |
Collection | Research Center for Digital Content Technology and Service: Auditory Models and Cognitive Computing |
Author affiliations |
1. Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing, Peoples R China
2. Peking Univ, Acad Adv Interdisciplinary Studies, Beijing, Peoples R China
3. Peking Univ, Coll Future Technol, Natl Biomed Imaging Ctr, Beijing, Peoples R China
4. Chinese Acad Sci, Inst Automat, Beijing, Peoples R China
5. Wuhan Univ, Sch Cyber Sci & Engn, Wuhan, Peoples R China
6. Beijing Acad Artificial Intelligence, Beijing, Peoples R China |
Recommended citation (GB/T 7714) | Zu, Wenqiang, Xie, Shenghao, Zhao, Qing, et al. Embedded prompt tuning: Towards enhanced calibration of pretrained models for medical images[J]. MEDICAL IMAGE ANALYSIS, 2024, 97: 13. |
APA | Zu, Wenqiang, Xie, Shenghao, Zhao, Qing, Li, Guoqi, & Ma, Lei. (2024). Embedded prompt tuning: Towards enhanced calibration of pretrained models for medical images. MEDICAL IMAGE ANALYSIS, 97, 13. |
MLA | Zu, Wenqiang, et al. "Embedded prompt tuning: Towards enhanced calibration of pretrained models for medical images". MEDICAL IMAGE ANALYSIS 97 (2024): 13. |
Deposit method: OAI harvest
Source: Institute of Automation