Author | Luntong Li; Yuanheng Zhu
Journal | IEEE Transactions on Neural Networks and Learning Systems
Publication Date | 2024
Pages | 1-10
Abstract | Deep reinforcement learning (DRL) benefits from the representation power of deep neural networks (NNs) to approximate the value function and policy in the learning process. Batch reinforcement learning (BRL) benefits from stable training and data efficiency with a fixed representation, and enjoys solid theoretical analysis. This work proposes least-squares deep policy gradient (LSDPG), a hybrid approach that combines least-squares reinforcement learning (RL) with online DRL to achieve the best of both worlds. LSDPG leverages a shared network to share useful features between the policy (actor) and the value function (critic), and it learns the policy, the value function, and the representation separately. First, LSDPG views the critic's deep NN as a linear combination of representation features weighted by the weights of the last layer, and performs policy evaluation with regularized least-squares temporal difference (LSTD) methods. Second, an arbitrary policy gradient algorithm can be applied to improve the policy. Third, an auxiliary task periodically distills the features from the critic into the representation. Unlike most DRL methods, whose critic operates in a nonstationary setting (the policy being evaluated keeps changing), the critic in LSDPG faces a stationary problem in each iteration of the critic update. We prove that, under some conditions, the critic converges to the regularized TD fixed point of the current policy, and the actor converges to a locally optimal policy. Experimental results on the challenging Procgen benchmark demonstrate the improved sample efficiency of LSDPG over proximal policy optimization (PPO) and phasic policy gradient (PPG).
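
The "shallow update" in the critic described above admits a compact closed form. The following is a minimal sketch of regularized LSTD policy evaluation on fixed features, not the paper's implementation: the function name lstd_regularized, the ridge-style regularizer lam, and the batch interface are assumptions made for illustration.

    import numpy as np

    def lstd_regularized(phi, phi_next, rewards, gamma=0.99, lam=1e-3):
        # phi, phi_next: (N, d) feature matrices for states s and successors s',
        # collected under the current (fixed) policy; rewards: (N,) one-step rewards.
        # Solves (A + lam*I) w = b with A = phi^T (phi - gamma * phi_next) and
        # b = phi^T rewards, so V(s) ~= phi(s) @ w is the regularized TD fixed
        # point for this batch, with w serving as the critic's last-layer weights.
        d = phi.shape[1]
        A = phi.T @ (phi - gamma * phi_next)
        b = phi.T @ rewards
        return np.linalg.solve(A + lam * np.eye(d), b)

Because the policy is held fixed while this linear system is solved, each such critic update is a stationary policy-evaluation problem, which is the property the abstract contrasts with the nonstationary critics of ordinary DRL.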
Source URL | http://ir.ia.ac.cn/handle/173211/57222
Collection | State Key Laboratory of Management and Control for Complex Systems_Deep Reinforcement Learning
Corresponding Author | Yuanheng Zhu
Affiliation | Institute of Automation, Chinese Academy of Sciences
Recommended Citation (GB/T 7714) | Luntong Li, Yuanheng Zhu. Boosting On-Policy Actor–Critic With Shallow Updates in Critic[J]. IEEE Transactions on Neural Networks and Learning Systems, 2024: 1-10.
APA | Luntong Li, & Yuanheng Zhu. (2024). Boosting On-Policy Actor–Critic With Shallow Updates in Critic. IEEE Transactions on Neural Networks and Learning Systems, 1-10.
MLA | Luntong Li, et al. "Boosting On-Policy Actor–Critic With Shallow Updates in Critic". IEEE Transactions on Neural Networks and Learning Systems (2024): 1-10.