Latent Landmark Graph for Efficient Exploration-Exploitation Balance in Hierarchical Reinforcement Learning
Document Type: Journal Article
Authors | Zhang Qingyang 1,2
Journal | Machine Intelligence Research
Publication Date | 2023-10
Pages | 158
Abstract | Goal-Conditioned Hierarchical Reinforcement Learning (GCHRL) decomposes the desired goal into subgoals and conducts exploration and exploitation in the subgoal space. Its effectiveness heavily relies on subgoal representation and selection. However, existing works do not consider the distinct information across hierarchical time scales when learning subgoal representations, and lack a subgoal selection strategy that balances exploration and exploitation. In this paper, we propose a novel method for efficient exploration-exploitation balance in HIerarchical reinforcement learning by dynamically constructing Latent Landmark graphs (HILL). HILL transforms the reward maximization problem of GCHRL into shortest path planning on graphs. To effectively consider the hierarchical time-scale information, HILL adopts a contrastive representation learning objective to learn informative latent representations. Based on these representations, HILL dynamically constructs latent landmark graphs and selects subgoals using two measures to balance exploration and exploitation. We implement two variants: HILL-hf generates graphs periodically, while HILL-lf generates graphs adaptively. Empirical results on continuous control tasks with sparse rewards demonstrate that both variants outperform state-of-the-art baselines in sample efficiency and asymptotic performance, with HILL-lf further reducing training time by 40% compared to HILL-hf.
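The abstract describes the mechanism only at a high level. The sketch below is a minimal, illustrative Python example (not the authors' implementation) of the general idea it outlines: landmarks embedded in a learned latent space are connected into a graph by latent distance, each landmark is scored by an exploitation term (a value estimate) plus an exploration bonus (a count-based novelty term), and a shortest path to the highest-scoring landmark yields the next subgoal for the low-level policy. The function names, the scoring rule, the distance threshold, and the use of Dijkstra's algorithm here are assumptions made for illustration only.

```python
# Hypothetical sketch of graph-based subgoal selection in the spirit of HILL.
# Not the authors' code: the scoring rule, threshold, and names are illustrative.
import heapq
import numpy as np


def build_landmark_graph(landmarks: np.ndarray, edge_threshold: float):
    """Connect landmark pairs whose latent distance is below a threshold.

    Returns an adjacency dict: node index -> list of (neighbor, distance).
    """
    n = len(landmarks)
    adj = {i: [] for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            d = float(np.linalg.norm(landmarks[i] - landmarks[j]))
            if d < edge_threshold:
                adj[i].append((j, d))
                adj[j].append((i, d))
    return adj


def shortest_path(adj, source: int, target: int):
    """Dijkstra over the landmark graph; returns the node sequence."""
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            break
        if d > dist.get(u, np.inf):
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, np.inf):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    if target not in dist:
        return [source]  # target unreachable: stay at the current landmark
    path, node = [target], target
    while node != source:
        node = prev[node]
        path.append(node)
    return path[::-1]


def select_subgoal(current_latent, landmarks, values, visit_counts,
                   edge_threshold=1.0, explore_weight=0.5):
    """Pick a landmark balancing exploitation (value estimate) and exploration
    (inverse visitation), then return the first waypoint toward it."""
    adj = build_landmark_graph(landmarks, edge_threshold)
    # Exploitation: estimated value of each landmark.
    # Exploration: count-based novelty bonus (an illustrative choice).
    scores = values + explore_weight / np.sqrt(visit_counts + 1.0)
    target = int(np.argmax(scores))
    source = int(np.argmin(np.linalg.norm(landmarks - current_latent, axis=1)))
    path = shortest_path(adj, source, target)
    next_node = path[1] if len(path) > 1 else path[0]
    return landmarks[next_node]  # latent subgoal handed to the low-level policy


# Toy usage: random landmarks in a 2-D latent space with stand-in statistics.
rng = np.random.default_rng(0)
landmarks = rng.normal(size=(20, 2))
values = rng.uniform(size=20)                          # stand-in value estimates
visit_counts = rng.integers(0, 50, 20).astype(float)   # stand-in visit counts
subgoal = select_subgoal(landmarks[0], landmarks, values, visit_counts)
print("selected latent subgoal:", subgoal)
```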
Source URL | http://ir.ia.ac.cn/handle/173211/57586
Collection | Research Center for Digital Content Technology and Services – Auditory Models and Cognitive Computing
Author Affiliations | 1. School of Future Technology, University of Chinese Academy of Sciences 2. Institute of Automation, Chinese Academy of Sciences 3. School of Artificial Intelligence, University of Chinese Academy of Sciences 4. Department of Computing Science, University of Alberta, Edmonton, T6G 2E8, Canada
Recommended Citation (GB/T 7714) | Zhang Qingyang, Zhang Hongming, Xing Dengpeng, et al. Latent Landmark Graph for Efficient Exploration-Exploitation Balance in Hierarchical Reinforcement Learning[J]. Machine Intelligence Research, 2023: 158.
APA | Zhang Qingyang, Zhang Hongming, Xing Dengpeng, & Bo Xu. (2023). Latent Landmark Graph for Efficient Exploration-Exploitation Balance in Hierarchical Reinforcement Learning. Machine Intelligence Research, 158.
MLA | Zhang Qingyang, et al. "Latent Landmark Graph for Efficient Exploration-Exploitation Balance in Hierarchical Reinforcement Learning". Machine Intelligence Research (2023): 158.
Deposit Method: OAI harvesting
Source: Institute of Automation