Neighbor-view Enhanced Model for Vision and Language Navigation
Document Type: Conference Paper
Author | Dong An |
Publication Date | 2021-10 |
Conference Date | 2021-10-20 |
Conference Location | Chengdu, China |
Abstract | Vision and Language Navigation (VLN) requires an agent to navigate to a target location by following natural language instructions. Most existing works represent a navigation candidate by the feature of the single view in which the candidate lies. However, an instruction may mention landmarks outside that single view as references, which can cause textual-visual matching in existing methods to fail. In this work, we propose a multi-module Neighbor-View Enhanced Model (NvEM) that adaptively incorporates visual contexts from neighbor views for better textual-visual matching. Specifically, our NvEM utilizes a subject module and a reference module to collect contexts from neighbor views: the subject module fuses neighbor views at a global level, and the reference module fuses neighbor objects at a local level. Subjects and references are adaptively determined via attention mechanisms. Our model also includes an action module to exploit the strong orientation guidance (e.g., "turn left") in instructions. Each module predicts a navigation action separately, and their weighted sum is used to predict the final action. Extensive experimental results demonstrate the effectiveness of the proposed method on the R2R and R4R benchmarks against several state-of-the-art navigators, and NvEM even outperforms some pre-trained models. Our code is available at https://github.com/MarSaKi/NvEM. |
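The abstract states that each module (subject, reference, action) predicts a navigation action separately and that a weighted sum produces the final prediction. The sketch below illustrates one way such a fusion could be implemented; all class and variable names here are hypothetical assumptions, not the authors' code, which is available at the linked repository (https://github.com/MarSaKi/NvEM).

```python
import torch
import torch.nn as nn

# Hypothetical sketch of weighted fusion of per-module action predictions.
# Names, dimensions, and the weighting scheme are assumptions for illustration;
# see https://github.com/MarSaKi/NvEM for the authors' actual implementation.

class WeightedActionFusion(nn.Module):
    def __init__(self, hidden_dim: int, num_modules: int = 3):
        super().__init__()
        # One mixing weight per module (e.g., subject, reference, action),
        # predicted from the agent's current hidden state.
        self.weight_head = nn.Linear(hidden_dim, num_modules)

    def forward(self, hidden_state: torch.Tensor,
                module_logits: list) -> torch.Tensor:
        # hidden_state:  (batch, hidden_dim)
        # module_logits: list of (batch, num_candidates) action scores,
        #                one tensor per module.
        weights = torch.softmax(self.weight_head(hidden_state), dim=-1)  # (batch, num_modules)
        stacked = torch.stack(module_logits, dim=1)   # (batch, num_modules, num_candidates)
        fused = (weights.unsqueeze(-1) * stacked).sum(dim=1)  # (batch, num_candidates)
        return fused  # final scores over navigation candidates
```

In this sketch the mixing weights are normalized with a softmax so the final score remains a convex combination of the per-module predictions; the navigation action would then be chosen as the argmax over the fused candidate scores.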
Conference Proceedings | Proceedings of the ACM International Conference on Multimedia |
Language | English |
Source URL | [http://ir.ia.ac.cn/handle/173211/56610] |
Collection | Institute of Automation_Center for Research on Intelligent Perception and Computing |
Author Affiliations | 1. University of Adelaide; 2. School of Future Technology, University of Chinese Academy of Sciences; 3. Chinese Academy of Sciences, Artificial Intelligence Research (CAS-AIR); 4. Center for Research on Intelligent Perception and Computing, Institute of Automation, Chinese Academy of Sciences; 5. Center for Excellence in Brain Science and Intelligence Technology (CEBSIT) |
Recommended Citation (GB/T 7714) | Dong An, Yuankai Qi, Yan Huang, et al. Neighbor-view Enhanced Model for Vision and Language Navigation[C]. In: Proceedings of the ACM International Conference on Multimedia. Chengdu, China, 2021-10-20.
Deposit Method: OAI Harvesting
Source: Institute of Automation