Learning allocentric representations of space for navigation
Document type | Journal article
Authors | Zhao DY (赵冬晔)1,4,5; Si BL (斯白露)3; Li XL (李小俚)2
Journal | Neurocomputing
Publication date | 2021
Volume | 453
Pages | 579-589
ISSN | 0925-2312
Keywords | Deep learning; Localization; Large-scale environment; Place cells; Sensorimotor integration; HippDNN
Affiliation rank | 1
Abstract | The hippocampus of the mammalian brain supports spatial navigation by building cognitive maps of the environments the animal explores. Little neurocomputational work has investigated the encoding and decoding mechanisms of hippocampal neural representations in large-scale environments. We propose a biologically inspired hierarchical neural network architecture that learns to transform egocentric sensorimotor inputs into allocentric spatial representations for navigation. The hierarchical network is composed of two parallel subnetworks mimicking the lateral entorhinal cortex (LEC) and medial entorhinal cortex (MEC), and one convergent subnetwork mimicking the hippocampus. The LEC subnetwork relays time-related visual information, while the MEC subnetwork supplies space-related information in the form of multi-resolution grid codes resulting from the integration of movement information. The convergent subnetwork integrates the information from both parallel subnetworks and predicts the position of the agent in the environment. Synaptic weights of the vision-to-place and grid-to-place connections are learned by stochastic gradient descent. Simulations in a large virtual maze demonstrate that hippocampal place units in the model form multiple, irregularly spaced place fields, similar to those observed in neurobiological experiments. The model accurately decodes the agent's position from the learned spatial representations. Moreover, the model adapts to degraded visual inputs and is therefore robust against perturbations. When the motion inputs are removed, the model has difficulty localizing, and the accuracy of its position predictions degrades.
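The architecture described in the abstract, with parallel visual and grid-code input streams converging on place units whose vision-to-place and grid-to-place weights are trained by stochastic gradient descent to predict position, can be illustrated with a minimal sketch. All specifics below, including the 1-D track, the random visual codes, the sine/cosine grid modules and their scales, and the softmax place layer, are illustrative assumptions rather than the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (assumed): a 1-D track discretized into N_PLACES locations.
N_PLACES = 20
DIM_VIS = 12  # dimensionality of the position-dependent visual code (assumed)

# Stand-in for the LEC stream: a fixed, position-dependent visual feature vector.
vis_codes = rng.normal(size=(N_PLACES, DIM_VIS))

def grid_code(pos):
    """Stand-in for the MEC stream: multi-resolution grid code,
    sine/cosine responses at several spatial scales."""
    scales = [4.0, 7.0, 11.0]
    feats = []
    for s in scales:
        phase = 2 * np.pi * pos / s
        feats += [np.sin(phase), np.cos(phase)]
    return np.array(feats)

grid_codes = np.array([grid_code(p) for p in range(N_PLACES)])

# Convergent "hippocampal" layer: softmax place units over the concatenated
# inputs; vision-to-place and grid-to-place weights learned jointly by SGD
# on a cross-entropy loss, with the true location as the target.
X = np.hstack([vis_codes, grid_codes])
W = np.zeros((X.shape[1], N_PLACES))
b = np.zeros(N_PLACES)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

lr = 0.5
for epoch in range(500):
    for pos in rng.permutation(N_PLACES):
        p = softmax(X[pos] @ W + b)
        err = p.copy()
        err[pos] -= 1.0  # gradient of cross-entropy w.r.t. the logits
        W -= lr * np.outer(X[pos], err)
        b -= lr * err

# Decoding: the most active place unit gives the predicted position.
preds = [int(np.argmax(softmax(X[p] @ W + b))) for p in range(N_PLACES)]
accuracy = np.mean([pred == p for p, pred in enumerate(preds)])
print(f"decoding accuracy: {accuracy:.2f}")
```

Zeroing the grid-code half of the input before decoding loosely mimics the motion-deprivation condition in the abstract: the place layer must then rely on vision alone, and decoding accuracy typically drops.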
Language | English
WOS accession number | WOS:000663418700014
Funding | National Key Research and Development Program of China (No. 2016YFC0801808); Shenzhen-Hong Kong Institute of Brain Science - Shenzhen Fundamental Research Institutions (Project No. NYKFKT20190018)
Source URL | [http://ir.sia.cn/handle/173321/28408]
Collection | Shenyang Institute of Automation / Robotics Laboratory
Corresponding author | Si BL (斯白露)
Author affiliations |
1. University of Chinese Academy of Sciences, Beijing 100049, China
2. State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing 100875, China
3. School of Systems Science, Beijing Normal University, Beijing 100875, China
4. State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China
5. Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110169, China
Recommended citation (GB/T 7714) | Zhao DY, Si BL, Li XL. Learning allocentric representations of space for navigation[J]. Neurocomputing, 2021, 453: 579-589.
APA | Zhao DY, Si BL, & Li XL. (2021). Learning allocentric representations of space for navigation. Neurocomputing, 453, 579-589.
MLA | Zhao DY, et al. "Learning allocentric representations of space for navigation." Neurocomputing 453 (2021): 579-589.
Ingestion method | OAI harvesting
Source | Shenyang Institute of Automation