Chinese Academy of Sciences Institutional Repositories Grid
ESRL: Efficient Sampling-based Reinforcement Learning for Sequence Generation

Document Type: Journal Article

Authors: Chenglong Wang3; Hang Zhou3; Yimin Hu3; Yifu Huo3; Bei Li3; Tongran Liu2; Tong Xiao1,3; Jingbo Zhu1,3
Journal: arXiv
Publication Date: 2023
Issue: 4
Corresponding Author Emails: xiaotong@mail.neu.edu.cn; zhujingbo@mail.neu.edu.cn
DOI: 10.48550/arXiv.2308.02223
Document Subtype: Review
Abstract

Applying Reinforcement Learning (RL) to sequence generation models enables the direct optimization of long-term rewards (e.g., BLEU and human feedback), but typically requires large-scale sampling over a space of action sequences. This poses a computational challenge in practical sequence generation problems, such as machine translation, where we often deal with a large action space (e.g., a vocabulary) and long action sequences (e.g., translations). In this work, we introduce two-stage sampling and dynamic sampling approaches to improve sampling efficiency when training sequence generation models via RL. We evaluate our approaches on traditional sequence generation tasks, including machine translation and abstractive summarization. Furthermore, we evaluate them in RL from human feedback (RLHF) by training a large language model with a reward model. Experimental results show that efficient sampling-based RL, referred to as ESRL, outperforms all baselines in terms of both training efficiency and memory consumption. Notably, ESRL yields consistent performance gains over the strong REINFORCE, minimum risk training, and proximal policy optimization methods.
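The abstract describes the general pattern of sampling-based RL for sequence generation: sample candidate sequences from the policy, score them with a sequence-level reward, and update the model, with ESRL reducing the sampling cost during training. The following PyTorch sketch is an illustration of that general pattern only, not the paper's implementation: the toy policy, the toy reward, and the dynamic_num_samples schedule (which shrinks the per-step sampling budget over training) are all hypothetical.

# Illustrative REINFORCE-style loop with a shrinking sampling budget.
# NOT the ESRL implementation; all names and the toy reward are assumptions.
import torch
import torch.nn as nn

VOCAB, SEQ_LEN, HIDDEN = 50, 8, 32

class TinyPolicy(nn.Module):
    """A minimal autoregressive policy over a toy vocabulary."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, HIDDEN)
        self.rnn = nn.GRUCell(HIDDEN, HIDDEN)
        self.out = nn.Linear(HIDDEN, VOCAB)

    def rollout(self, batch_size):
        """Sample batch_size sequences; return tokens and their log-probs."""
        h = torch.zeros(batch_size, HIDDEN)
        tok = torch.zeros(batch_size, dtype=torch.long)  # BOS = token 0
        toks, logps = [], []
        for _ in range(SEQ_LEN):
            h = self.rnn(self.embed(tok), h)
            dist = torch.distributions.Categorical(logits=self.out(h))
            tok = dist.sample()
            toks.append(tok)
            logps.append(dist.log_prob(tok))
        return torch.stack(toks, 1), torch.stack(logps, 1)

def reward_fn(seqs, target):
    """Toy sequence-level reward: fraction of positions matching a target."""
    return (seqs == target).float().mean(dim=1)

def dynamic_num_samples(step, total_steps, max_n=16, min_n=4):
    """Hypothetical schedule: spend fewer samples as training progresses."""
    frac = step / max(1, total_steps - 1)
    return max(min_n, int(max_n * (1.0 - frac)))

policy = TinyPolicy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
target = torch.randint(VOCAB, (SEQ_LEN,))

total_steps = 200
for step in range(total_steps):
    n = dynamic_num_samples(step, total_steps)      # dynamic sampling budget
    seqs, logps = policy.rollout(n)                 # sample action sequences
    r = reward_fn(seqs, target)                     # sequence-level reward
    baseline = r.mean()                             # variance reduction
    loss = -((r - baseline).detach().unsqueeze(1) * logps).sum(1).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

In a real system the policy would be a translation or summarization model, the reward would be BLEU, ROUGE, or a learned reward model (as in RLHF), and the sampling schedule would be driven by the training signal rather than a fixed linear decay.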

Indexed In: EI
Language: English
Source URL: http://ir.psych.ac.cn/handle/311026/45254
Collection: Institute of Psychology, CAS Key Laboratory of Behavioral Science
Author Affiliations:
1. NiuTrans Research, Shenyang, China
2. CAS Key Laboratory of Behavioral Science, Institute of Psychology, CAS, Beijing, China
3. School of Computer Science and Engineering, Northeastern University, Shenyang, China
Recommended Citation:
GB/T 7714:
Chenglong Wang, Hang Zhou, Yimin Hu, et al. ESRL: Efficient Sampling-based Reinforcement Learning for Sequence Generation[J]. arXiv, 2023(4).
APA:
Chenglong Wang, Hang Zhou, Yimin Hu, Yifu Huo, Bei Li, ... & Jingbo Zhu. (2023). ESRL: Efficient Sampling-based Reinforcement Learning for Sequence Generation. arXiv(4).
MLA:
Chenglong Wang, et al. "ESRL: Efficient Sampling-based Reinforcement Learning for Sequence Generation." arXiv 4 (2023).

Deposit Method: OAI Harvesting

Source: Institute of Psychology

