Chinese Academy of Sciences Institutional Repositories Grid
A New Pre-Training Paradigm for Offline Multi-Agent Reinforcement Learning with Suboptimal Data

Document Type: Conference Paper

Authors: Meng Linghui 1,2; Zhang Xi 1,2; Xing Dengpeng 1,2; Xu Bo 1,2
Publication Date: 2024-04
Conference Date: 2024.4.14-2024.4.19
Conference Location: Seoul, Korea
Abstract

Offline multi-agent reinforcement learning (MARL) with a pre-training paradigm, in which a large quantity of trajectories is used for offline pre-training followed by online deployment, has recently gained popularity. While they perform well on various tasks, conventional pre-trained decision-making models based on imitation learning typically require many expert trajectories or demonstrations, which limits the development of pre-trained policies in the multi-agent case. To address this problem, we propose a new setting in which a multi-agent policy is pre-trained offline on suboptimal (non-expert) data and then tested online with the expectation of high rewards. For this practical setting, we propose YANHUI, a simple yet effective framework inspired by contrastive learning that uses a well-designed reward-contrast function to learn multi-agent policy representations from a dataset containing data of various reward levels rather than only expert trajectories. Furthermore, we enrich multi-agent policy pre-training with a mixture-of-experts architecture so that the policy is represented dynamically. With the same quantity of offline StarCraft Multi-Agent Challenge data, YANHUI achieves significant improvements over offline MARL baselines. In particular, our method remains competitive with earlier state-of-the-art approaches even when using only 10% of the expert data used by other baselines, with the rest replaced by poor data.
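The abstract does not spell out the reward-contrast function, so the following is only a minimal sketch of what such an objective could look like, assuming a supervised-contrastive (InfoNCE-style) form in PyTorch: trajectory embeddings from the same reward level are treated as positives and pulled together, while embeddings from different reward levels are pushed apart. All names here (reward_contrast_loss, reward_levels, the bucketing into poor/medium/expert) are hypothetical and not taken from the paper.

import torch
import torch.nn.functional as F

def reward_contrast_loss(embeddings: torch.Tensor,
                         reward_levels: torch.Tensor,
                         temperature: float = 0.1) -> torch.Tensor:
    # embeddings:    (N, D) trajectory representations from a policy encoder
    # reward_levels: (N,)   integer reward bucket per trajectory,
    #                       e.g. 0 = poor, 1 = medium, 2 = expert (hypothetical)
    z = F.normalize(embeddings, dim=1)               # unit vectors, so dot product = cosine similarity
    sim = z @ z.t() / temperature                    # (N, N) pairwise similarity logits
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float('-inf'))  # a trajectory is not its own pair

    # Positives: other trajectories with the same reward level.
    pos_mask = reward_levels.unsqueeze(0) == reward_levels.unsqueeze(1)
    pos_mask = pos_mask & ~self_mask

    # Log-probability of each pair under a softmax over all non-self pairs.
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)

    # Average log-likelihood of the positives for each anchor;
    # anchors with no positive in the batch are skipped.
    pos_counts = pos_mask.sum(dim=1)
    valid = pos_counts > 0
    pos_log_prob = log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1)
    loss = -pos_log_prob[valid] / pos_counts[valid]
    return loss.mean()

# Toy usage: 6 trajectory embeddings, two from each reward level.
emb = torch.randn(6, 32)
levels = torch.tensor([0, 0, 1, 1, 2, 2])
print(reward_contrast_loss(emb, levels))

Under this assumption, a policy encoder trained with such a loss would map expert and poor trajectories to separable regions of representation space, which is one plausible way a pre-trained policy could exploit mixed-quality, non-expert data.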

Source URL: http://ir.ia.ac.cn/handle/173211/57331
Collection: Research Center for Digital Content Technology and Services, Auditory Models and Cognitive Computing
Author Affiliations:
1. Institute of Automation, Chinese Academy of Sciences
2. School of Artificial Intelligence, University of Chinese Academy of Sciences
Recommended Citation (GB/T 7714):
Meng Linghui, Zhang Xi, Xing Dengpeng, et al. A New Pre-Training Paradigm for Offline Multi-Agent Reinforcement Learning with Suboptimal Data[C]. Seoul, Korea, 2024.4.14-2024.4.19.

Deposit Method: OAI Harvesting

Source: Institute of Automation
