Chinese Academy of Sciences Institutional Repositories Grid
Reinforcement Learning and Deep Learning based Lateral Control for Autonomous Driving

Document Type: Journal Article

Authors: Dong Li 1,2; Dongbin Zhao 1,2; Qichao Zhang 1,2; Yaran Chen 1,2
Journal: IEEE Computational Intelligence Magazine
Publication Date: 2019-04
Volume: 14; Issue: 2; Pages: 83-98
Keywords: Deep Learning; Autonomous Driving; Visual Control; Reinforcement Learning
ISSN: 1556-603X
English Abstract

This paper investigates vision-based autonomous driving with deep learning and reinforcement learning methods. Unlike end-to-end learning methods, our method breaks the vision-based lateral control system down into a perception module and a control module. The perception module, based on a multi-task learning neural network, first takes a driver-view image as its input and predicts the track features. The control module, based on reinforcement learning, then makes a control decision from these features. To improve data efficiency, we propose visual TORCS (VTORCS), a deep reinforcement learning environment built on the open racing car simulator (TORCS). With the provided functions, one can train an agent with image input or various physical sensor measurements, or evaluate a perception algorithm on this simulator. The trained reinforcement learning controller outperforms the linear quadratic regulator (LQR) and model predictive control (MPC) controllers on different tracks. The experiments demonstrate that the perception module shows promising performance and that the controller is capable of driving the vehicle well along the track center with visual input.
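
The decoupled pipeline described in the abstract can be pictured with a short sketch: a multi-task perception network maps a driver-view frame to track features, and a separate control policy maps those features to a steering command. The module names, the choice of features (track-center offset and heading angle), and the layer sizes below are illustrative assumptions, not the paper's exact architecture or training setup.

```python
# Minimal sketch of the perception-module + control-module split (assumed names).
import torch
import torch.nn as nn

class PerceptionNet(nn.Module):
    """Multi-task CNN: maps a driver-view image to track features."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        )
        # Two task-specific heads sharing one backbone (multi-task learning).
        self.offset_head = nn.Linear(32 * 4 * 4, 1)  # lateral offset from track center
        self.angle_head = nn.Linear(32 * 4 * 4, 1)   # heading angle w.r.t. track axis

    def forward(self, image):
        h = self.backbone(image)
        return torch.cat([self.offset_head(h), self.angle_head(h)], dim=-1)

class SteeringPolicy(nn.Module):
    """Control module: maps predicted track features to a steering command.
    In the paper this policy is trained with reinforcement learning; here it is
    only instantiated to show the interface."""
    def __init__(self, feature_dim=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feature_dim, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Tanh(),  # steering command in [-1, 1]
        )

    def forward(self, features):
        return self.net(features)

# One control step: perception runs first, then the controller acts on its features.
perception, policy = PerceptionNet(), SteeringPolicy()
image = torch.rand(1, 3, 64, 64)   # stand-in for a VTORCS driver-view frame
steering = policy(perception(image))
print(steering.shape)              # torch.Size([1, 1])
```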

Language: English
Source URL: [http://ir.ia.ac.cn/handle/173211/23517]
Collection: The State Key Laboratory of Management and Control for Complex Systems_Deep Reinforcement Learning
Corresponding Author: Dongbin Zhao
Affiliations:
1. The State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences
2. University of Chinese Academy of Sciences
Recommended Citation Formats
GB/T 7714
Dong Li, Dongbin Zhao, Qichao Zhang, et al. Reinforcement Learning and Deep Learning based Lateral Control for Autonomous Driving[J]. IEEE Computational Intelligence Magazine, 2019, 14(2): 83-98.
APA Dong Li, Dongbin Zhao, Qichao Zhang, & Yaran Chen. (2019). Reinforcement Learning and Deep Learning based Lateral Control for Autonomous Driving. IEEE Computational Intelligence Magazine, 14(2), 83-98.
MLA Dong Li, et al. "Reinforcement Learning and Deep Learning based Lateral Control for Autonomous Driving". IEEE Computational Intelligence Magazine 14.2 (2019): 83-98.

Deposit Method: OAI Harvesting

Source: Institute of Automation


Unless otherwise stated, all content in this system is protected by copyright, and all rights are reserved.