Proximal policy optimization with model-based methods
Document Type: Journal Article
Authors | Li SL(李帅龙)4,5,6; Zhang W(张伟)5,6 |
Journal | JOURNAL OF INTELLIGENT & FUZZY SYSTEMS
Publication Date | 2022 |
Volume | 42 | Issue | 6 | Pages | 5399-5410 |
Keywords | Model-based reinforcement learning; model-free reinforcement learning; policy optimization method |
ISSN | 1064-1246 |
Rights Ranking | 1 |
Abstract (English) | Model-free reinforcement learning methods have successfully been applied to practical applications such as decision-making problems in Atari games. However, these methods have inherent shortcomings, such as high variance and low sample efficiency. To improve the policy performance and sample efficiency of model-free reinforcement learning, we propose proximal policy optimization with model-based methods (PPOMM), a fusion of model-based and model-free reinforcement learning. PPOMM considers not only the information of past experience but also the prediction of the future state. PPOMM adds the information of the next state to the objective function of the proximal policy optimization (PPO) algorithm through a model-based method. This method uses two components to optimize the policy: the error of PPO and the error of model-based reinforcement learning. We use the latter to optimize a latent transition model and predict the information of the next state. When evaluated across 49 Atari games in the Arcade Learning Environment (ALE), this method outperforms the state-of-the-art PPO algorithm on most games. The experimental results show that PPOMM performs better than or the same as the original algorithm in 33 games. |
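The two-component objective the abstract describes (the PPO clipped-surrogate error plus a model-based prediction error from a latent transition model) can be sketched as below. This is a minimal numpy illustration of the general idea, not the paper's exact formulation: the function names, the mean-squared latent loss, and the weighting coefficient `beta` are all illustrative assumptions.

```python
import numpy as np

def ppo_clip_loss(ratio, advantage, eps=0.2):
    """Standard PPO clipped surrogate, negated so that lower is better."""
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return -np.minimum(unclipped, clipped).mean()

def latent_model_loss(predicted_next_latent, actual_next_latent):
    """Prediction error of a latent transition model (MSE, an assumption)."""
    return np.mean((predicted_next_latent - actual_next_latent) ** 2)

def ppomm_loss(ratio, advantage, predicted_next_latent, actual_next_latent,
               beta=0.5):
    """Hypothetical combined objective: PPO error + weighted model error.

    `beta` trades off the model-free and model-based terms; its value here
    is arbitrary and not taken from the paper.
    """
    return (ppo_clip_loss(ratio, advantage)
            + beta * latent_model_loss(predicted_next_latent,
                                       actual_next_latent))
```

In a full implementation, both terms would be computed from network outputs and minimized jointly by gradient descent; here they are plain functions so the structure of the combined loss is visible.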
Language | English |
WOS Accession Number | WOS:000790690300042 |
Funding | National Natural Science Foundation of China [52175272] ; Joint Fund of Science & Technology Department of Liaoning Province ; State Key Laboratory of Robotics, China [2020-KF-22-03] ; State Key Laboratory of Robotics Foundation [Y91Z0303] ; China Postdoctoral Science Foundation [2020M670814] ; Liaoning Provincial Natural Science Foundation [2020-MS-033] |
Source URL | http://ir.sia.cn/handle/173321/30987 |
Collection | Shenyang Institute of Automation _ Space Automation Technology Research Laboratory
Corresponding Authors | Zhang W(张伟); Leng YQ(冷雨泉)
Author Affiliations |
1. Guangdong Provincial Key Laboratory of Human-Augmentation and Rehabilitation Robotics in Universities, Southern University of Science and Technology, Shenzhen, China
2. Shenzhen Key Laboratory of Biomimetic Robotics and Intelligent Systems, Department of Mechanical and Energy Engineering, Southern University of Science and Technology, Shenzhen, China
3. CVTE Research, Guangzhou, P.R. China
4. University of Chinese Academy of Sciences, Beijing, China
5. Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang, China
6. State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, China
Recommended Citation (GB/T 7714) | Li SL, Zhang W, Zhang HW, et al. Proximal policy optimization with model-based methods[J]. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS, 2022, 42(6): 5399-5410. |
APA | Li SL, Zhang W, Zhang HW, Zhang X, & Leng YQ. (2022). Proximal policy optimization with model-based methods. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS, 42(6), 5399-5410.
MLA | Li SL, et al. "Proximal policy optimization with model-based methods." JOURNAL OF INTELLIGENT & FUZZY SYSTEMS 42.6 (2022): 5399-5410.
Ingestion Method: OAI harvesting
Source: Shenyang Institute of Automation
Unless otherwise noted, all content in this system is protected by copyright, with all rights reserved.