Hybrid Alignment Training for Large Language Models
Document type | Journal article
Authors | Chenglong Wang3; Hang Zhou3; Kaiyan Chang3; Bei Li3; Yongyu Mu3; Tong Xiao1,3; Tongran Liu2
Journal | arXiv
Publication date | 2024
Corresponding author email | tong xiao
Document subtype | Review
English abstract | Alignment training is crucial for enabling large language models (LLMs) to cater to human intentions and preferences. It is typically performed in two stages with different objectives: instruction-following alignment and human-preference alignment. However, aligning LLMs with these objectives in sequence suffers from an inherent problem: the objectives may conflict, and the LLMs cannot be guaranteed to align well with both the instructions and human preferences simultaneously. To address this, in this work we propose a Hybrid Alignment Training (HBAT) approach, based on alternating alignment and modified elastic weight consolidation methods. The basic idea is to alternate between the different objectives during alignment training, so that better collaboration can be achieved between the two alignment tasks. We experiment with HBAT on summarization and dialogue tasks. Experimental results show that the proposed HBAT significantly outperforms all baselines. Notably, HBAT yields consistent performance gains over traditional two-stage alignment training with both proximal policy optimization and direct preference optimization.
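To make the training scheme in the abstract concrete, the following is a minimal sketch, not the authors' code, of alternating between an instruction-following loss and a preference loss while applying an elastic-weight-consolidation (EWC)-style penalty. The toy model, synthetic data, stand-in loss functions, and uniform-importance penalty are all illustrative assumptions; the paper's actual method uses PPO/DPO objectives and a modified EWC term.

```python
# A minimal sketch of alternating alignment training with an EWC-style
# penalty, under the assumptions stated above (toy model, synthetic data).
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy "policy": a tiny linear model standing in for an LLM.
model = nn.Linear(8, 1)
opt = torch.optim.SGD(model.parameters(), lr=1e-2)

# Synthetic batches for the two alignment objectives.
x_sft, y_sft = torch.randn(32, 8), torch.randn(32, 1)  # instruction-following data
x_pref = torch.randn(32, 8)                            # preference data
pref_margin = torch.ones(32, 1)                        # stand-in preference signal

# Reference snapshot of parameters for the EWC-style penalty.
ref_params = [p.detach().clone() for p in model.parameters()]

def ewc_penalty(lmbda: float = 0.1) -> torch.Tensor:
    """Quadratic penalty keeping parameters near the reference snapshot.
    (The paper uses a *modified* EWC; uniform importance is assumed here.)"""
    return lmbda * sum(((p - r) ** 2).sum()
                       for p, r in zip(model.parameters(), ref_params))

for step in range(100):
    if step % 2 == 0:
        # Instruction-following objective (supervised loss).
        loss = F.mse_loss(model(x_sft), y_sft) + ewc_penalty()
    else:
        # Preference objective (hinge-style stand-in for PPO/DPO).
        loss = F.relu(pref_margin - model(x_pref)).mean() + ewc_penalty()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Alternating the two losses while penalizing drift from a reference snapshot is the interplay the abstract describes: each objective is optimized in turn, and the regularizer discourages either step from undoing what the other achieved.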
Indexing | EI
Language | English
Source URL | http://ir.psych.ac.cn/handle/311026/48279
Collection | Institute of Psychology, CAS Key Laboratory of Behavioral Science
Author affiliations | 1. NiuTrans Research, Shenyang, China; 2. CAS Key Laboratory of Behavioral Science, Institute of Psychology, CAS, Beijing, China; 3. School of Computer Science and Engineering, Northeastern University, Shenyang, China
Recommended citation (GB/T 7714) | Chenglong Wang, Hang Zhou, Kaiyan Chang, et al. Hybrid Alignment Training for Large Language Models[J]. arXiv, 2024.
APA | Chenglong Wang, Hang Zhou, Kaiyan Chang, Bei Li, Yongyu Mu, ... & Jingbo Zhu. (2024). Hybrid Alignment Training for Large Language Models. arXiv.
MLA | Chenglong Wang, et al. "Hybrid Alignment Training for Large Language Models." arXiv (2024).
Ingestion method: OAI harvesting
Source: Institute of Psychology