Part-aware Prompt Tuning For Weakly Supervised Referring Expression Grounding
Document Type: Conference Paper
Authors | Chenlin, Zhao 1,3,4 |
Publication Date | 2024-01-29 |
Conference Date | 2024-01-29 |
Conference Venue | Amsterdam |
Abstract (English) | Referring expression grounding is a complex multimodal task that merits careful investigation. To alleviate conventional methods' reliance on fine-grained supervised data, there is a pressing need to explore visual grounding techniques under the weakly supervised setting, which uses only image-text pairs. Weakly supervised methods built on pretrained multimodal models have achieved impressive results; however, during the inference phase they fail to generate a comprehensive attention map for entities, which reduces inference accuracy. In this study, we introduce Part-aware Prompt Tuning (PPT), a novel weakly supervised method. By dividing the entities extracted by the detector into different parts and optimizing the part-aware prompts during the training phase, these prompts can guide the attention of the pretrained multimodal model during the inference phase toward a more comprehensive focus on the whole entity, thereby enhancing inference accuracy. Empirical validation on two benchmark datasets, RefCOCO and RefCOCO+, underscores the clear superiority of our proposed method over prior referring expression grounding methods. |
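The abstract describes the core idea at a high level: split each detector-proposed entity into parts, attach a learnable prompt to each part, and tune only those prompts under image-text supervision while the pretrained multimodal encoders stay frozen. The sketch below is a minimal conceptual illustration of that idea, not the authors' released implementation; the names `split_box_into_parts`, `PartAwarePrompts`, the vertical-strip part split, the mean-pooled aggregation, and the batch contrastive objective are all assumptions made for the example.

```python
# Conceptual sketch of part-aware prompt tuning (hypothetical interface, NOT the
# paper's code). A detected entity box is split into K parts, each part gets a
# learnable prompt vector, and only the prompts are optimized while the pretrained
# multimodal encoders stay frozen (weak supervision: matched image-text pairs).
import torch
import torch.nn as nn
import torch.nn.functional as F


def split_box_into_parts(box, num_parts=3):
    """Split an entity box (x1, y1, x2, y2) into `num_parts` vertical strips."""
    x1, y1, x2, y2 = box
    xs = torch.linspace(x1, x2, num_parts + 1)
    return [(xs[i].item(), y1, xs[i + 1].item(), y2) for i in range(num_parts)]


class PartAwarePrompts(nn.Module):
    """Learnable part-aware prompts added to frozen part-region features."""

    def __init__(self, num_parts=3, dim=512):
        super().__init__()
        self.prompts = nn.Parameter(torch.randn(num_parts, dim) * 0.02)

    def forward(self, part_feats):
        # part_feats: (batch, num_parts, dim) features of the part regions from a
        # frozen visual encoder; the prompts nudge each part toward the whole entity.
        return F.normalize(part_feats + self.prompts, dim=-1)


def weak_contrastive_loss(entity_feats, text_feats, temperature=0.07):
    """Symmetric image-text contrastive loss over a batch: the only supervision
    available in the weakly supervised setting is image-sentence matching."""
    logits = entity_feats @ text_feats.t() / temperature
    targets = torch.arange(logits.size(0))
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))


# Toy training step with random stand-in tensors for frozen encoder outputs.
if __name__ == "__main__":
    num_parts, dim, batch = 3, 512, 8
    parts = split_box_into_parts((10.0, 20.0, 100.0, 80.0), num_parts)

    prompt_module = PartAwarePrompts(num_parts, dim)
    optimizer = torch.optim.AdamW(prompt_module.parameters(), lr=1e-3)

    part_feats = torch.randn(batch, num_parts, dim)            # frozen visual part features
    text_feats = F.normalize(torch.randn(batch, dim), dim=-1)  # frozen text features

    prompted = prompt_module(part_feats)                       # (batch, num_parts, dim)
    entity_feats = F.normalize(prompted.mean(dim=1), dim=-1)   # aggregate parts into an entity
    loss = weak_contrastive_loss(entity_feats, text_feats)
    loss.backward()
    optimizer.step()
    print(f"part boxes: {parts}\ntoy loss: {loss.item():.4f}")
```

In a real pipeline the part features would come from a frozen pretrained multimodal model (e.g. a CLIP-style encoder) applied to the part regions; how PPT actually constructs and injects the prompts into the attention computation is detailed in the paper itself.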
Proceedings Publisher | Springer Cham |
Source URL | http://ir.ia.ac.cn/handle/173211/57464 |
Collection | Institute of Automation_State Key Laboratory of Pattern Recognition_Multimedia Computing and Graphics Team |
Corresponding Author | Changsheng, Xu |
Author Affiliations |
1. State Key Laboratory of Multimodal Artificial Intelligence Systems (MAIS), Institute of Automation, Chinese Academy of Sciences (CASIA)
2. East China Normal University
3. DAMO Academy, Alibaba Group
4. School of Artificial Intelligence, University of Chinese Academy of Sciences (UCAS)
5. Peng Cheng Laboratory
Recommended Citation (GB/T 7714) | Chenlin, Zhao, Jiabo, Ye, Yaguang, Song, et al. Part-aware Prompt Tuning For Weakly Supervised Referring Expression Grounding[C]. In: . Amsterdam. 2024-01-29.
Deposit Method: OAI harvesting
Source: Institute of Automation