Chinese Academy of Sciences Institutional Repositories Grid
A Cooperation Graph Approach for Multiagent Sparse Reward Reinforcement Learning

Document Type: Conference Paper

Authors: Qingxu Fu 1,2; Tenghai Qiu 1,2; Zhiqiang Pu 1,2; Jianqiang Yi 1,2; Wanmai Yuan 1,2
Publication Date: 2022
Conference Date: July 2022
Conference Venue: Padua, Italy
Abstract

Multiagent reinforcement learning (MARL) can solve complex cooperative tasks. However, the efficiency of existing MARL methods relies heavily on well-defined reward functions. Multiagent tasks with sparse reward feedback are especially challenging, not only because of the credit assignment problem but also because of the low probability of obtaining positive reward feedback. In this paper, we design a graph network called the Cooperation Graph (CG). The Cooperation Graph is the combination of two simple bipartite graphs, namely the Agent Clustering subgraph (ACG) and the Cluster Designating subgraph (CDG). Based on this novel graph structure, we propose a Cooperation Graph Multiagent Reinforcement Learning (CG-MARL) algorithm, which can efficiently deal with the sparse reward problem in multiagent tasks. In CG-MARL, agents are directly controlled by the Cooperation Graph, and a policy neural network is trained to manipulate this Cooperation Graph, guiding agents to achieve cooperation in an implicit way. This hierarchical feature of CG-MARL provides space for customized cluster-actions, an extensible interface for introducing fundamental cooperation knowledge. In experiments, CG-MARL shows state-of-the-art performance on sparse reward multiagent benchmarks, including the anti-invasion interception task and the multi-cargo delivery task.
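The two-subgraph structure in the abstract can be sketched in code: agents connect to clusters (ACG), clusters connect to cluster-actions (CDG), and composing the two edge sets tells each agent what to do. This is a minimal illustration under assumed names and data layouts, not the authors' implementation; in CG-MARL the edge modifications shown here would be emitted by a trained policy network rather than called by hand.

```python
import random

class CooperationGraph:
    """Sketch of a Cooperation Graph built from two bipartite subgraphs:
    an Agent Clustering subgraph (ACG: agent -> cluster) and a
    Cluster Designating subgraph (CDG: cluster -> cluster-action)."""

    def __init__(self, n_agents, n_clusters, n_actions, seed=0):
        rng = random.Random(seed)
        self.n_clusters = n_clusters
        self.n_actions = n_actions
        # ACG edges: agent i belongs to cluster acg[i]
        self.acg = [rng.randrange(n_clusters) for _ in range(n_agents)]
        # CDG edges: cluster c is designated cluster-action cdg[c]
        self.cdg = [rng.randrange(n_actions) for _ in range(n_clusters)]

    def modify_acg(self, agent, cluster):
        """Reassign one agent to a different cluster."""
        assert 0 <= cluster < self.n_clusters
        self.acg[agent] = cluster

    def modify_cdg(self, cluster, action):
        """Designate a new cluster-action for one cluster."""
        assert 0 <= action < self.n_actions
        self.cdg[cluster] = action

    def agent_actions(self):
        """Compose ACG and CDG: each agent inherits its cluster's action."""
        return [self.cdg[c] for c in self.acg]

# A policy controlling the graph only edits a few edges per step;
# agents then follow whatever their cluster is designated to do.
cg = CooperationGraph(n_agents=6, n_clusters=2, n_actions=3)
cg.modify_acg(agent=0, cluster=1)
cg.modify_cdg(cluster=1, action=2)
assert cg.agent_actions()[0] == 2
```

The indirection is the point: the policy never outputs per-agent actions, only graph edits, so a single cluster-level decision moves every agent in that cluster at once.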

Language: English
Source URL: [http://ir.ia.ac.cn/handle/173211/57224]
Collection: Research Center for Integrated Information Systems_Aircraft Intelligent Technology
Author Affiliations: 1. 80146 - Institute of Automation, Chinese Academy of Sciences
2. 80170 - University of Chinese Academy of Sciences
Recommended Citation
GB/T 7714
Qingxu Fu, Tenghai Qiu, Zhiqiang Pu, et al. A Cooperation Graph Approach for Multiagent Sparse Reward Reinforcement Learning[C]. In: Padua, Italy, July 2022.

Ingestion Method: OAI harvesting

Source: Institute of Automation


Unless otherwise specified, all content in this system is protected by copyright, and all rights are reserved.