Chinese Academy of Sciences Institutional Repositories Grid
Improving learning efficiency of recurrent neural network through adjusting weights of all layers in a biologically-inspired framework

Document type: Conference paper

Authors: Xiao Huang; Wei Wu; Peijie Yin; Hong Qiao
Publication date: 2017
Conference dates: 14-19 May 2017
Conference location: Anchorage, Alaska
Abstract: Brain-inspired models have become a focus of the artificial intelligence field. As a biologically plausible network, the recurrent neural network in the reservoir computing framework has been proposed as a popular model of cortical computation because of its complicated dynamics and highly recurrent connections. To train this network, unlike adjusting only the readout weights as in liquid computing theory, or changing only the internal recurrent weights, we draw inspiration from the global modulation that human emotions exert on cognition and motor control: we introduce a novel reward-modulated Hebbian learning rule that trains the network by adjusting not only the internal recurrent weights but also the input weights and readout weights together, using solely delayed, phasic rewards. Experimental results show that the proposed method can train a recurrent neural network in a near-chaotic regime to complete motion-control and working-memory tasks with higher accuracy and learning efficiency.
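The reward-modulated Hebbian scheme the abstract describes can be sketched roughly as follows. This is a minimal illustration in the general node-perturbation style (a Hebbian correlation between exploratory noise and presynaptic activity, gated by a reward relative to a running baseline), applied alike to the input, recurrent, and readout weights. The network size, noise scale, learning rate, gain, and the sine-tracking task are all illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100                       # reservoir size (assumed)
eta = 1e-4                    # learning rate (assumed)
g = 1.5                       # gain > 1 places the reservoir near the chaotic regime

W_in = rng.normal(0.0, 1.0, (N, 1))          # input weights
W_rec = rng.normal(0.0, g / np.sqrt(N), (N, N))  # internal recurrent weights
W_out = np.zeros((1, N))                     # readout weights

x = np.zeros(N)               # reservoir state
r_bar = 0.0                   # running-average reward baseline
errors = []

for t in range(2000):
    u = np.array([np.sin(2 * np.pi * t / 200)])        # input signal
    target = 0.5 * np.sin(2 * np.pi * t / 200 + 0.5)   # desired output
    r_prev = np.tanh(x)                                # presynaptic firing rates
    xi = rng.normal(0.0, 0.05, N)                      # exploratory noise (reservoir)
    x = W_rec @ r_prev + W_in @ u + xi
    r = np.tanh(x)
    xi_out = rng.normal(0.0, 0.05, 1)                  # exploratory noise (readout)
    y = W_out @ r + xi_out
    reward = -float((y - target) ** 2)                 # phasic scalar reward
    advantage = reward - r_bar                         # reward relative to baseline
    r_bar += 0.05 * (reward - r_bar)
    # One modulatory signal gates Hebbian (noise x presynaptic-rate) updates
    # in all three weight sets, rather than the readout alone.
    W_out += eta * advantage * np.outer(xi_out, r)
    W_rec += eta * advantage * np.outer(xi, r_prev)
    W_in += eta * advantage * np.outer(xi, u)
    errors.append(-reward)
```

The key design point mirrored from the abstract is that a single delayed reward signal modulates every layer's plasticity, so no per-layer error gradients are needed.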
Source URL: [http://ir.ia.ac.cn/handle/173211/20101]
Collection: Institute of Automation, State Key Laboratory of Management and Control for Complex Systems, Robot Application and Theory Group
Recommended citation (GB/T 7714):
Xiao Huang, Wei Wu, Peijie Yin, et al. Improving learning efficiency of recurrent neural network through adjusting weights of all layers in a biologically-inspired framework[C]. In: . Anchorage, Alaska. 14-19 May 2017.

Deposit method: OAI harvesting

Source: Institute of Automation


Unless otherwise noted, all content in this system is protected by copyright, and all rights are reserved.