WD3: Taming the estimation bias in deep reinforcement learning
Document Type: Conference Paper
Author | He Q (何强) 1,2 |
Publication Date | 2020-12 |
Conference Date | 2020-12 |
Conference Venue | Baltimore, MD, USA |
Keywords | deep reinforcement learning; estimation bias; neural networks |
DOI | 10.1109/ICTAI50040.2020.00068 |
Abstract (English) | The overestimation phenomenon caused by function approximation is a well-known issue in value-based reinforcement learning algorithms such as deep Q-networks and DDPG, and it can lead to suboptimal policies. To address this issue, TD3 takes the minimum value over a pair of critics, which introduces underestimation bias. By unifying these two opposites, we propose a novel Weighted Delayed Deep Deterministic Policy Gradient (WD3) algorithm, which reduces the estimation error and further improves performance by weighting a pair of critics. We compare the learning process of the value function under DDPG, TD3, and our proposed algorithm, verifying that our algorithm can indeed eliminate the estimation error of the value function. We evaluate our algorithm on the OpenAI Gym continuous control tasks, outperforming the state-of-the-art algorithms on every environment tested. |
Language | English |
Source URL | http://ir.ia.ac.cn/handle/173211/48893 |
Research Unit | Integrated Information System Research Center: Brain-Machine Fusion and Cognitive Assessment |
Corresponding Author | Hou XW (侯新文) |
Author Affiliations | 1. University of Chinese Academy of Sciences 2. Institute of Automation, Chinese Academy of Sciences |
Recommended Citation (GB/T 7714) | He Q, Hou XW. WD3: Taming the estimation bias in deep reinforcement learning[C]. Baltimore, MD, USA, 2020-12. |
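The abstract describes weighting a pair of critics to balance TD3's underestimation (taking the minimum of two critics) against DDPG-style overestimation. A minimal sketch of such a weighted target value, assuming an illustrative interpolation between the pessimistic minimum and the mean of the two critic estimates (the paper's exact weighting scheme may differ, and `beta` is a hypothetical hyperparameter):

```python
def weighted_target(reward, gamma, q1, q2, beta=0.75):
    """Compute a TD target that interpolates between the TD3-style
    pessimistic minimum of two critics and their mean.

    beta = 1.0 recovers the pure minimum (underestimation-prone, as in TD3);
    beta = 0.0 uses the plain average (overestimation-prone, closer to DDPG).
    Note: this weighting is an illustrative stand-in, not necessarily the
    exact rule used by WD3.
    """
    q_min = min(q1, q2)          # pessimistic estimate
    q_avg = 0.5 * (q1 + q2)      # average estimate
    return reward + gamma * (beta * q_min + (1.0 - beta) * q_avg)
```

In practice the same interpolation would be applied elementwise to batched critic outputs when forming the Bellman target for the critic update.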
Deposit Method: OAI Harvesting
Source: Institute of Automation