Filtered Observations for Model-Based Multi-agent Reinforcement Learning
Document Type: Conference Paper
Authors | Meng Linghui1,2; Xiong Xuantang; Zang Yifan; et al. |
Publication Date | 2023-09 |
Conference Dates | 2023-09-18 to 2023-09-22 |
Conference Location | Turin, Italy |
Abstract | Reinforcement learning (RL) pursues high sample efficiency in practical environments to avoid costly interactions. Learning to plan with a world model in a compact latent space for policy optimization significantly improves sample efficiency in single-agent RL. Although single-agent world-model construction methods can be naturally extended, existing multi-agent schemes fail to acquire world models effectively, because redundant information increases rapidly with the number of agents. To address this issue, in this paper we leverage guided diffusion to filter out this noisy information, which harms teamwork. The purified global states thus obtained are then used to build a unified world model. Based on the learned world model, we denoise each agent's observation and plan for multi-agent policy optimization, facilitating efficient cooperation. We name our method UTOPIA, a model-based method for cooperative multi-agent reinforcement learning (MARL). Compared to strong model-free and model-based baselines, our method shows enhanced sample efficiency on various testbeds, including the challenging StarCraft Multi-Agent Challenge tasks. |
Source URL | http://ir.ia.ac.cn/handle/173211/57332 |
Collection | Research Center for Digital Content Technology and Services: Auditory Models and Cognitive Computing |
Corresponding Authors | Xing Dengpeng; Xu Bo |
Affiliations | 1. School of Artificial Intelligence, University of Chinese Academy of Sciences; 2. Institute of Automation, Chinese Academy of Sciences |
Recommended Citation (GB/T 7714) | Meng Linghui, Xiong Xuantang, Zang Yifan, et al. Filtered Observations for Model-Based Multi-agent Reinforcement Learning[C]. Turin, Italy, 2023-09-18 to 2023-09-22.
Deposited via: OAI harvesting
Source: Institute of Automation
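
The abstract above describes filtering each agent's noisy observation with guided diffusion before building a shared world model. Below is a minimal, illustrative sketch of that idea: a DDPM-style reverse-diffusion loop, shifted by the gradient of a guidance score, purifies per-agent latents, which are then pooled into a global state. All names, dimensions, the guidance function, and the pooling step are assumptions for illustration, not the paper's actual code.

```python
# Hedged sketch of guided-diffusion observation filtering for MARL.
# Assumed components: a toy encoder, a small denoiser, a quadratic
# guidance score, and mean pooling; none of these come from the paper.
import torch
import torch.nn as nn

OBS_DIM, LATENT_DIM, N_AGENTS, T = 32, 16, 4, 50  # assumed sizes and step count

class Denoiser(nn.Module):
    """DDPM-style network that predicts the noise in a latent at step t."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + 1, 128), nn.SiLU(), nn.Linear(128, LATENT_DIM)
        )

    def forward(self, z_t, t):
        t_feat = t.float().view(-1, 1) / T  # condition on the normalized step
        return self.net(torch.cat([z_t, t_feat], dim=-1))

encoder = nn.Linear(OBS_DIM, LATENT_DIM)  # maps raw observations to latents
denoiser = Denoiser()
betas = torch.linspace(1e-4, 0.02, T)     # standard DDPM noise schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

def guidance_score(z):
    # Stand-in for the guidance signal that steers denoising toward
    # team-relevant, low-noise states; the paper's actual guidance differs.
    return -(z ** 2).sum()

def purify(obs):
    """Reverse diffusion from encoded observations to purified latents."""
    z = encoder(obs).detach()
    for t in reversed(range(T)):
        t_batch = torch.full((z.shape[0],), t)
        with torch.no_grad():
            eps = denoiser(z, t_batch)  # predicted noise at this step
        # Classifier-guidance-style shift: gradient of the score w.r.t. z.
        z_req = z.clone().requires_grad_(True)
        grad = torch.autograd.grad(guidance_score(z_req), z_req)[0]
        a, a_bar = alphas[t], alpha_bars[t]
        # DDPM posterior mean, nudged along the guidance gradient.
        mean = (z - (1.0 - a) / torch.sqrt(1.0 - a_bar) * eps) / torch.sqrt(a)
        mean = mean + betas[t] * grad
        noise = torch.randn_like(z) if t > 0 else torch.zeros_like(z)
        z = (mean + torch.sqrt(betas[t]) * noise).detach()
    return z

obs = torch.randn(N_AGENTS, OBS_DIM)   # one noisy observation per agent
purified = purify(obs)                 # filtered per-agent latents
global_state = purified.mean(dim=0)    # simple pooling into a unified state
print(global_state.shape)              # torch.Size([16])
```

In the paper itself, the purified global states feed a unified world model that is then used for planning and multi-agent policy optimization; the quadratic score and mean pooling here only stand in for those learned components.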