Chinese Academy of Sciences Institutional Repositories Grid
Model-Free Reinforcement Learning by Embedding an Auxiliary System for Optimal Control of Nonlinear Systems

Document type: Journal article

Authors: Xu, Zhenhui (2); Shen, Tielong (2); Cheng, Daizhan (1)
Journal: IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS
Publication date: 2022-04-01
Volume: 33; Issue: 4; Pages: 1520-1534
Keywords: Mathematical model; Trajectory; Heuristic algorithms; Optimal control; System dynamics; Artificial neural networks; Convergence; Approximate optimal control design; auxiliary trajectory; completely model-free; integral reinforcement learning (IRL)
ISSN: 2162-237X
DOI: 10.1109/TNNLS.2020.3042589
Abstract: In this article, a novel integral reinforcement learning (IRL) algorithm is proposed to solve the optimal control problem for continuous-time nonlinear systems with unknown dynamics. The main challenge in learning is rejecting the oscillation caused by the externally added probing noise. This article addresses the issue by embedding an auxiliary trajectory that is designed as an exciting signal for learning the optimal solution. First, the auxiliary trajectory is used to decompose the state trajectory of the controlled system. Then, using the decoupled trajectories, a model-free policy iteration (PI) algorithm is developed, in which the policy evaluation step and the policy improvement step alternate until convergence to the optimal solution. Notably, an appropriate external input is introduced at the policy improvement step to eliminate the requirement of the input-to-state dynamics. Finally, the algorithm is implemented on an actor-critic structure. The output weights of the critic neural network (NN) and the actor NN are updated sequentially by least-squares methods. The convergence of the algorithm and the stability of the closed-loop system are guaranteed. Two examples are given to show the effectiveness of the proposed algorithm.
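The abstract describes a generic actor-critic policy-iteration structure: a policy evaluation step that fits critic weights by least squares, alternated with a policy improvement step that updates the actor. As a rough illustration of that generic alternation only, and not the paper's model-free IRL algorithm with an embedded auxiliary trajectory, the sketch below runs policy iteration on a hypothetical scalar linear system with a quadratic critic basis and a linear actor; the dynamics, cost weights, basis choice, and all names are illustrative assumptions.

```python
# Minimal policy-iteration sketch (illustrative only). Critic and actor weights
# are fit by least squares on sampled trajectories of an assumed toy scalar
# system dx/dt = a*x + b*u with quadratic running cost. This is NOT the paper's
# auxiliary-trajectory IRL algorithm; every choice here is an assumption.
import numpy as np

a, b = -1.0, 1.0          # assumed toy dynamics dx/dt = a*x + b*u
q, r = 1.0, 1.0           # running cost q*x^2 + r*u^2
dt, T = 0.01, 2.0         # integration step and rollout horizon

def rollout(x0, policy):
    """Simulate the toy system under `policy`; return visited states and stage costs."""
    xs, cs = [x0], []
    x = x0
    for _ in range(int(T / dt)):
        u = policy(x)
        cs.append((q * x**2 + r * u**2) * dt)
        x = x + (a * x + b * u) * dt   # forward Euler step
        xs.append(x)
    return np.array(xs), np.array(cs)

# Critic basis: V(x) ~= w * x^2 ; actor: u(x) = -k * x
w, k = 0.0, 0.0
for _ in range(20):
    # --- Policy evaluation (least squares): fit V along sampled trajectories ---
    feats, targets = [], []
    for x0 in np.linspace(-2.0, 2.0, 9):
        xs, cs = rollout(x0, lambda x: -k * x)
        cost_to_go = np.cumsum(cs[::-1])[::-1]   # tail sums of the stage costs
        feats.extend(xs[:-1]**2)
        targets.extend(cost_to_go)
    w = np.linalg.lstsq(np.array(feats)[:, None], np.array(targets), rcond=None)[0][0]
    # --- Policy improvement: u = -(1/2) R^{-1} b dV/dx = -(b*w/r) * x ---
    k = b * w / r

print("critic weight w ~=", w, "  actor gain k ~=", k)
```

For this toy quadratic problem the loop settles near the LQR solution (w close to sqrt(2) - 1), which is intended only to show the evaluation/improvement alternation the abstract refers to, not to reproduce the paper's results.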
Funding project: JSPS KAKENHI [17H03284]
WOS research areas: Computer Science; Engineering
Language: English
WOS accession number: WOS:000778930100016
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Source URL: http://ir.amss.ac.cn/handle/2S8OKBNM/60285
Collection: Academy of Mathematics and Systems Science, Chinese Academy of Sciences
Corresponding author: Xu, Zhenhui
Author affiliations:
1. Chinese Acad Sci, Acad Math & Syst Sci, Key Lab Syst & Control, Beijing 100190, Peoples R China
2. Sophia Univ, Dept Engn & Appl Sci, Tokyo 1028554, Japan
Recommended citation:
GB/T 7714: Xu, Zhenhui, Shen, Tielong, Cheng, Daizhan. Model-Free Reinforcement Learning by Embedding an Auxiliary System for Optimal Control of Nonlinear Systems[J]. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2022, 33(4): 1520-1534.
APA: Xu, Zhenhui, Shen, Tielong, & Cheng, Daizhan. (2022). Model-Free Reinforcement Learning by Embedding an Auxiliary System for Optimal Control of Nonlinear Systems. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 33(4), 1520-1534.
MLA: Xu, Zhenhui, et al. "Model-Free Reinforcement Learning by Embedding an Auxiliary System for Optimal Control of Nonlinear Systems". IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 33.4 (2022): 1520-1534.

Deposit method: OAI harvesting

Source: Academy of Mathematics and Systems Science

