Traffic Signal Control Using Offline Reinforcement Learning
Document Type: Conference Paper
Authors | Dai, Xingyuan1,2 |
Publication Date | 2021-10 |
Conference Date | 2021-10 |
Conference Venue | Beijing |
Abstract (English) | The problem of traffic signal control is essential but remains unsolved. Some researchers use online reinforcement learning, including off-policy methods, to derive an optimal control policy through interaction between agents and simulated environments. However, it is difficult to deploy such a policy in real transportation systems due to the gap between simulated and real traffic data. In this paper, we consider an offline reinforcement learning method to tackle the problem. First, we construct a realistic traffic environment and obtain offline data based on a classic actuated traffic signal controller. Then, we use an offline reinforcement learning algorithm, namely conservative Q-learning, to learn an efficient control policy from the offline dataset. We conduct experiments on a typical road intersection and compare the conservative Q-learning policy with the actuated policy and two data-driven policies based on off-policy reinforcement learning and imitation learning. Empirical results indicate that in the offline-learning setting the conservative Q-learning policy performs significantly better than the other baselines, including the actuated policy, whereas the other two data-driven policies perform poorly in test scenarios. |
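To make the approach concrete, below is a minimal sketch of discrete-action conservative Q-learning (CQL) on a logged dataset, of the kind the abstract describes (transitions collected from an actuated controller). It is an illustration only, not the paper's implementation: the state dimension, number of signal phases, network sizes, and the penalty weight `ALPHA` are assumed values, and the dataset loading is left abstract.

```python
# Sketch: conservative Q-learning update for discrete signal phases on offline data.
# Assumptions (not from the paper): 16-dim state, 4 phases, MLP Q-network, alpha = 1.0.
import torch
import torch.nn as nn
import torch.nn.functional as F

STATE_DIM = 16   # assumed: e.g., per-lane queue lengths / occupancies
N_PHASES = 4     # assumed: number of signal phases at the intersection
GAMMA = 0.99     # discount factor
ALPHA = 1.0      # weight of the conservative penalty

q_net = nn.Sequential(nn.Linear(STATE_DIM, 128), nn.ReLU(), nn.Linear(128, N_PHASES))
target_q = nn.Sequential(nn.Linear(STATE_DIM, 128), nn.ReLU(), nn.Linear(128, N_PHASES))
target_q.load_state_dict(q_net.state_dict())
optimizer = torch.optim.Adam(q_net.parameters(), lr=3e-4)

def cql_update(batch):
    """One gradient step on a batch of offline transitions (s, a, r, s', done)."""
    s, a, r, s_next, done = batch          # tensors from the logged dataset
    q_all = q_net(s)                       # Q-values for every phase in state s
    q_taken = q_all.gather(1, a.unsqueeze(1)).squeeze(1)

    # Standard TD target from the target network (no gradient through it).
    with torch.no_grad():
        td_target = r + GAMMA * (1.0 - done) * target_q(s_next).max(dim=1).values
    td_loss = F.mse_loss(q_taken, td_target)

    # Conservative penalty: push down Q-values over all actions (log-sum-exp)
    # while pushing up the Q-values of actions actually present in the data.
    conservative = (torch.logsumexp(q_all, dim=1) - q_taken).mean()

    loss = td_loss + ALPHA * conservative
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The penalty term keeps Q-value estimates for actions absent from the logged data low, which is the mechanism that lets CQL avoid the overestimation that, per the abstract, makes plain off-policy RL and imitation learning perform poorly when trained purely offline.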
Language | English |
Source URL | http://ir.ia.ac.cn/handle/173211/49936 |
Collection | Institute of Automation, State Key Laboratory for Management and Control of Complex Systems, Advanced Control and Automation Team |
Author Affiliations | 1. The State Key Laboratory for Management and Control of Complex Systems, Institute of Automation, Chinese Academy of Sciences 2. School of Artificial Intelligence, University of Chinese Academy of Sciences |
Recommended Citation (GB/T 7714) | Dai, Xingyuan, Zhao, Chen, Li, Xiaoshuang, et al. Traffic Signal Control Using Offline Reinforcement Learning[C]. In: . Beijing, 2021-10. |
Deposit Method: OAI harvesting
Source: Institute of Automation