DIMSAN: Fast Exploration with the Synergy between Density-based Intrinsic Motivation and Self-adaptive Action Noise
Document Type: Conference Paper
Authors | Li, Jiayi (2,4)
Publication Date | 2021-10
Conference Date | 2021.5.30-2021.6.5
Conference Venue | Xi'an
Country | China
Abstract (English) | Exploration in environments with sparse rewards remains a challenging problem in Deep Reinforcement Learning (DRL). Off-policy methods usually require a large number of training samples, and as the dimensions of the state and action spaces grow, they become increasingly sample-inefficient. In this paper, we propose a novel fast exploration method for off-policy reinforcement learning, called Density-based Intrinsic Motivation and Self-adaptive Action Noise (DIMSAN). Our main contribution is twofold: (1) We propose a Density-based Intrinsic Motivation (DIM) method. It introduces a new intrinsic-reward generation mechanism based on density estimation of samples during experience replay and encourages the agent to seek novel, unfamiliar states. (2) We propose a Self-adaptive Action Noise (SAN) to handle the exploration-exploitation trade-off, which automatically adjusts the exploration step by adding adaptive action-space noise. The synergy between DIM and SAN guides the agent to search the state and action spaces with high efficiency. We evaluate our method on benchmark manipulation tasks and on more challenging tasks that we designed. Empirical results show that our method outperforms existing methods in terms of convergence speed and sample efficiency, especially on the challenging tasks.
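The abstract describes the two components only at a high level. The sketch below illustrates one plausible reading of them: a kernel-density novelty bonus computed over replay-buffer states (standing in for DIM) and a Gaussian action-noise scale adapted from episode returns (standing in for SAN). The class names, the Gaussian-kernel density estimator, and the return-driven adaptation rule are assumptions made for illustration; they are not the paper's implementation.

```python
# Minimal sketch of the two ideas described in the abstract, not the paper's
# actual algorithm. All names and the specific estimators are assumptions.
import numpy as np


class DensityIntrinsicReward:
    """Intrinsic bonus that is larger for states in low-density regions of the
    replay buffer (an illustrative stand-in for the DIM component)."""

    def __init__(self, bandwidth=0.5, max_samples=10_000):
        self.bandwidth = bandwidth
        self.max_samples = max_samples
        self.states = []

    def add(self, state):
        # Keep a bounded window of recently replayed states.
        self.states.append(np.asarray(state, dtype=np.float64))
        if len(self.states) > self.max_samples:
            self.states.pop(0)

    def bonus(self, state):
        if not self.states:
            return 1.0
        s = np.asarray(state, dtype=np.float64)
        buf = np.stack(self.states)                        # (N, d)
        d2 = np.sum((buf - s) ** 2, axis=1)                # squared distances
        density = np.mean(np.exp(-d2 / (2 * self.bandwidth ** 2)))
        return 1.0 / (1.0 + density)                       # rare state -> larger bonus


class AdaptiveActionNoise:
    """Gaussian action noise whose scale grows when returns stagnate and shrinks
    when they improve (an illustrative stand-in for the SAN component)."""

    def __init__(self, sigma=0.2, sigma_min=0.05, sigma_max=1.0, rate=1.05):
        self.sigma, self.sigma_min, self.sigma_max, self.rate = sigma, sigma_min, sigma_max, rate
        self.best_return = -np.inf

    def update(self, episode_return):
        if episode_return > self.best_return:
            self.best_return = episode_return
            self.sigma = max(self.sigma_min, self.sigma / self.rate)  # exploit more
        else:
            self.sigma = min(self.sigma_max, self.sigma * self.rate)  # explore more

    def perturb(self, action):
        return action + np.random.normal(0.0, self.sigma, size=np.shape(action))
```

In an off-policy training loop, the density bonus would be added to the extrinsic reward when transitions are replayed, and the noise object would perturb the policy's action at environment-interaction time, with `update` called once per episode; how DIMSAN actually combines the two terms is specified in the paper itself.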
Language | English
Source URL | http://ir.ia.ac.cn/handle/173211/48540
Collection | Intelligent Robot Systems Research
Corresponding Author | Lu, Tao
Affiliations | 1. Research and Development Department, China Academy of Launch Vehicle Technology, Beijing, China
2. State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China
3. Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
4. School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
Recommended Citation (GB/T 7714) | Li, Jiayi, Li, Boyao, Lu, Tao, et al. DIMSAN: Fast Exploration with the Synergy between Density-based Intrinsic Motivation and Self-adaptive Action Noise[C]. Xi'an, 2021.5.30-2021.6.5.
Deposit Method: OAI harvesting
Source: Institute of Automation