Improving Visual Grounding With Visual-Linguistic Verification and Iterative Reasoning
Document Type: Conference Paper
Authors | Li Yang1,3, Yan Xu, Chunfeng Yuan, et al.
Publication Date | 2022-06
Conference Date | 2022-06
Conference Venue | New Orleans, Louisiana
Abstract | Visual grounding is a task to locate the target indicated by a natural language expression. Existing methods extend the generic object detection framework to this problem. They base the visual grounding on the features from pre-generated proposals or anchors, and fuse these features with the text embeddings to locate the target mentioned by the text. However, modeling the visual features from these predefined locations may fail to fully exploit the visual context and attribute information in the text query, which limits their performance. In this paper, we propose a transformer-based framework for accurate visual grounding by establishing text-conditioned discriminative features and performing multi-stage cross-modal reasoning. Specifically, we develop a visual-linguistic verification module to focus the visual features on regions relevant to the textual descriptions while suppressing the unrelated areas. A language-guided feature encoder is also devised to aggregate the visual contexts of the target object to improve the object's distinctiveness. To retrieve the target from the encoded visual features, we further propose a multi-stage cross-modal decoder to iteratively speculate on the correlations between the image and text for accurate target localization. Extensive experiments on five widely used datasets validate the efficacy of our proposed components and demonstrate state-of-the-art performance.
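The abstract outlines three components: a visual-linguistic verification module that re-weights visual features by their relevance to the text, a language-guided feature encoder, and a multi-stage cross-modal decoder that iteratively refines the target localization. The sketch below is a minimal, hypothetical PyTorch illustration of the verification and iterative-decoding ideas only; the module names, dimensions, score formulation, and decoder design are assumptions made for illustration and do not reproduce the authors' implementation.

```python
# Hypothetical sketch of the two ideas named in the abstract: a text-conditioned
# verification score that suppresses text-irrelevant visual tokens, followed by a
# multi-stage cross-modal decoder that iteratively refines a single target query.
import torch
import torch.nn as nn
import torch.nn.functional as F


class VisualLinguisticVerification(nn.Module):
    """Re-weights visual tokens by their semantic relevance to the text (assumed form)."""

    def __init__(self, dim: int = 256):
        super().__init__()
        self.visual_proj = nn.Linear(dim, dim)
        self.text_proj = nn.Linear(dim, dim)

    def forward(self, visual: torch.Tensor, text: torch.Tensor) -> torch.Tensor:
        # visual: (B, N, C) flattened image tokens; text: (B, L, C) word embeddings.
        v = F.normalize(self.visual_proj(visual), dim=-1)
        t = F.normalize(self.text_proj(text), dim=-1)
        # Relevance of each visual token to its best-matching word, mapped to (0, 1).
        scores = torch.sigmoid((v @ t.transpose(1, 2)).max(dim=-1, keepdim=True).values)
        return visual * scores  # suppress regions unrelated to the expression


class IterativeCrossModalDecoder(nn.Module):
    """Refines one target query over several cross-attention stages (assumed form)."""

    def __init__(self, dim: int = 256, stages: int = 3, heads: int = 8):
        super().__init__()
        self.query = nn.Parameter(torch.zeros(1, 1, dim))
        self.stages = nn.ModuleList(
            nn.MultiheadAttention(dim, heads, batch_first=True) for _ in range(stages)
        )
        self.box_head = nn.Linear(dim, 4)  # (cx, cy, w, h), normalized to [0, 1]

    def forward(self, visual: torch.Tensor) -> torch.Tensor:
        q = self.query.expand(visual.size(0), -1, -1)
        for attn in self.stages:
            q = q + attn(q, visual, visual)[0]  # iteratively gather target evidence
        return self.box_head(q).sigmoid().squeeze(1)


if __name__ == "__main__":
    B, N, L, C = 2, 400, 12, 256  # illustrative batch, token, word, and channel sizes
    visual, text = torch.randn(B, N, C), torch.randn(B, L, C)
    verified = VisualLinguisticVerification(C)(visual, text)
    boxes = IterativeCrossModalDecoder(C)(verified)
    print(boxes.shape)  # torch.Size([2, 4])
```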
Proceedings Publisher | Institute of Electrical and Electronics Engineers (IEEE)
Source URL | http://ir.ia.ac.cn/handle/173211/52140
Collection | Institute of Automation_National Laboratory of Pattern Recognition_Video Content Security Team
Corresponding Author | Chunfeng Yuan
Author Affiliations | 1. NLPR, Institute of Automation, Chinese Academy of Sciences; 2. CAS Center for Excellence in Brain Science and Intelligence Technology; 3. School of Artificial Intelligence, University of Chinese Academy of Sciences; 4. The Chinese University of Hong Kong
Recommended Citation (GB/T 7714) | Li Yang, Yan Xu, Chunfeng Yuan, et al. Improving Visual Grounding With Visual-Linguistic Verification and Iterative Reasoning[C]. New Orleans, Louisiana, 2022-06.
Deposit Method: OAI Harvesting
Source: Institute of Automation