Chinese Academy of Sciences Institutional Repositories Grid
Mixing Update Q-value for Deep Reinforcement Learning

Document Type: Conference Paper

Authors: Li Zhunan 1,2; Hou Xinwen 2
Publication Date: 2019-09
Conference Date: 2019-07-14 to 2019-07-19
Conference Venue: Budapest, Hungary
DOI: 10.1109/IJCNN.2019.8852397
Pages: 1-6
Abstract (English)

Value-based reinforcement learning methods such as deep Q-learning are known to overestimate action values, which can lead to suboptimal policies. The problem also persists in actor-critic algorithms. In this paper, we propose a novel mechanism to minimize its effects on both the critic and the actor. Our mechanism builds on Double Q-learning: it mixes the update action value based on the minimum and the maximum between a pair of critics to limit overestimation. We then propose a specific adaptation to the Twin Delayed Deep Deterministic policy gradient algorithm (TD3) and show that the resulting algorithm not only reduces the observed overestimations, as hypothesized, but also achieves much better performance on several tasks.
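For intuition, below is a minimal sketch (in Python/PyTorch) of how such a mixed TD target could be formed from a pair of TD3-style target critics, as the abstract describes. The helper name mixed_td_target, the fixed coefficient beta, and its default value are illustrative assumptions, not the paper's exact formulation.

    import torch

    # Sketch of a "mixing update" TD target over a TD3-style pair of target
    # critics. `beta` and `mixed_td_target` are hypothetical names; the paper
    # defines the actual mixture, which is not reproduced here.
    def mixed_td_target(q1_next, q2_next, reward, not_done,
                        beta=0.75, gamma=0.99):
        # Element-wise min and max over the twin critics' estimates.
        q_min = torch.min(q1_next, q2_next)
        q_max = torch.max(q1_next, q2_next)
        # Convex combination: beta -> 1 recovers TD3's pure minimum.
        q_mix = beta * q_min + (1.0 - beta) * q_max
        return reward + not_done * gamma * q_mix

    # Toy usage with dummy batch tensors.
    q1 = torch.tensor([1.0, 2.0])
    q2 = torch.tensor([1.5, 1.8])
    target = mixed_td_target(q1, q2,
                             reward=torch.tensor([0.1, 0.0]),
                             not_done=torch.tensor([1.0, 1.0]))

Plain TD3 uses only the minimum of the two critics, which curbs overestimation but can introduce underestimation bias; blending in the maximum trades off the two sources of bias.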

Source Publication Author: IEEE
Proceedings Publisher: IEEE
Language: English
Source URL: [http://ir.ia.ac.cn/handle/173211/39160]
Collection: Intelligent System and Engineering
Corresponding Author: Hou Xinwen
Affiliations:
1. School of Artificial Intelligence, University of Chinese Academy of Sciences
2. Center for Research on Intelligent System and Engineering, Institute of Automation, Chinese Academy of Sciences
Recommended Citation (GB/T 7714):
Li Zhunan, Hou Xinwen. Mixing Update Q-value for Deep Reinforcement Learning[C]. In: 2019 International Joint Conference on Neural Networks (IJCNN). Budapest, Hungary, 2019: 1-6.

Deposit Method: OAI Harvesting

Source: Institute of Automation


Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.