Boosting On-Policy Actor-Critic With Shallow Updates in Critic
Document Type: Journal Article
Authors | Li, Luntong (1,2); Zhu, Yuanheng (1,2) |
Journal | IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS |
Publication Date | 2024-04-15 |
Pages | 10 |
Keywords | Artificial neural networks; Vectors; Task analysis; Training; Representation learning; Approximation algorithms; Optimization; Actor-critic; deep reinforcement learning (DRL); proximal policy optimization (PPO); shallow reinforcement learning (SRL) |
ISSN | 2162-237X |
DOI | 10.1109/TNNLS.2024.3378913 |
Corresponding Author | Zhu, Yuanheng (yuanheng.zhu@ia.ac.cn) |
Abstract | Deep reinforcement learning (DRL) benefits from the representation power of deep neural networks (NNs) to approximate the value function and policy in the learning process. Batch reinforcement learning (BRL) benefits from stable training and data efficiency with a fixed representation and enjoys solid theoretical analysis. This work proposes least-squares deep policy gradient (LSDPG), a hybrid approach that combines least-squares reinforcement learning (RL) with online DRL to achieve the best of both worlds. LSDPG leverages a shared network to share useful features between the policy (actor) and the value function (critic). LSDPG learns the policy, the value function, and the representation separately. First, LSDPG views the deep NN of the critic as a linear combination of the representation weighted by the weights of the last layer and performs policy evaluation with regularized least-squares temporal difference (LSTD) methods. Second, arbitrary policy gradient algorithms can be applied to improve the policy. Third, an auxiliary task is used to periodically distill the features from the critic into the representation. Unlike most DRL methods, where the critic algorithms are often used in a nonstationary situation, i.e., the policy to be evaluated is changing, the critic in LSDPG works on a stationary case in each iteration of the critic update. We prove that, under some conditions, the critic converges to the regularized TD fixed point of the current policy, and the actor converges to a locally optimal policy. Experimental results on the challenging Procgen benchmark illustrate the improved sample efficiency of LSDPG over proximal policy optimization (PPO) and phasic policy gradient (PPG). |
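For illustration, the regularized LSTD policy evaluation described in the abstract can be summarized in a minimal sketch: the critic is treated as a linear value function on fixed (shared-network) features, and its last-layer weights are solved in closed form. The function and variable names below (lstd_critic_update, phi, phi_next, rewards, lam) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def lstd_critic_update(phi, phi_next, rewards, gamma=0.99, lam=1e-3):
    """Regularized LSTD solve for the critic's last-layer weights w (sketch).

    The critic value is modeled as V(s) = phi(s)^T w on fixed features, and w
    solves the regularized TD fixed-point equations (A + lam*I) w = b with
    A = Phi^T (Phi - gamma * Phi') and b = Phi^T r.

    phi      : (N, d) features of visited states
    phi_next : (N, d) features of successor states
    rewards  : (N,)   observed rewards
    """
    A = phi.T @ (phi - gamma * phi_next)                 # (d, d)
    b = phi.T @ rewards                                  # (d,)
    w = np.linalg.solve(A + lam * np.eye(phi.shape[1]), b)
    return w
```

In the hybrid scheme described above, such a closed-form critic update would alternate with ordinary policy gradient steps on the actor and periodic feature distillation into the shared representation.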
Funding Project | National Natural Science Foundation of China |
WOS Research Areas | Computer Science; Engineering |
Language | English |
WOS Record Number | WOS:001205847500001 |
Publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC |
Funding Agency | National Natural Science Foundation of China |
Source URL | http://ir.ia.ac.cn/handle/173211/58300 |
Collection | State Key Laboratory of Management and Control for Complex Systems - Deep Reinforcement Learning |
Author Affiliations | 1. Chinese Acad Sci, Inst Automat, State Key Lab Multimodal Artificial Intelligence S, Beijing 100190, Peoples R China; 2. Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing 100049, Peoples R China |
Recommended Citation (GB/T 7714) | Li, Luntong, Zhu, Yuanheng. Boosting On-Policy Actor-Critic With Shallow Updates in Critic[J]. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024: 10. |
APA | Li, Luntong, & Zhu, Yuanheng. (2024). Boosting On-Policy Actor-Critic With Shallow Updates in Critic. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 10. |
MLA | Li, Luntong, et al. "Boosting On-Policy Actor-Critic With Shallow Updates in Critic". IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS (2024): 10. |
Deposit Method: OAI Harvesting
Source: Institute of Automation