Chinese Academy of Sciences Institutional Repositories Grid
LRHP: Learning Representations for Human Preferences via Preference Pairs

Document Type: Journal Article

Authors: Chenglong Wang3; Yang Gan3; Yifu Huo3; Yongyu Mu3; Qiaozhi He3; Murun Yang3; Tong Xiao2,3; Chunliang Zhang2,3; Tongran Liu1; Jingbo Zhu2,3
Journal: arXiv
Publication Date: 2024
Corresponding Author Email: xiaotong@mail.neu.edu.cn
DOI: 10.48550/arXiv.2410.04503
English Abstract

To improve human-preference alignment training, current research has developed numerous preference datasets consisting of preference pairs labeled as "preferred" or "dispreferred". These preference pairs are typically used to encode human preferences into a single numerical value through reward modeling, which acts as a reward signal during reinforcement learning from human feedback (RLHF). However, representing these human preferences as a single numerical value complicates the analysis of the preferences and restricts their broader application beyond RLHF. In contrast, in this work, we introduce a preference representation learning task that aims to construct a richer and more structured representation of human preferences. We further develop a more generalizable framework, Learning Representations for Human Preferences via preference pairs (namely LRHP), which extends beyond traditional reward modeling to tackle this task. We verify the utility of preference representations in two downstream tasks: preference data selection and preference margin prediction. Building upon these preference representations, we achieve strong performance in both tasks, significantly outperforming baselines.
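The reward modeling that the abstract contrasts with can be sketched as a pairwise ranking objective: a model scores each response with a single scalar, and training pushes the "preferred" score above the "dispreferred" one. The minimal sketch below uses a toy linear reward head over fixed-size embeddings and a Bradley-Terry-style loss; the `RewardModel` class and the random embeddings are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Toy reward model: maps a response embedding to one scalar reward."""
    def __init__(self, dim=16):
        super().__init__()
        self.head = nn.Linear(dim, 1)

    def forward(self, x):
        return self.head(x).squeeze(-1)

def bradley_terry_loss(r_preferred, r_dispreferred):
    # Pairwise ranking loss: -log sigmoid(r_w - r_l), averaged over the batch.
    return -nn.functional.logsigmoid(r_preferred - r_dispreferred).mean()

torch.manual_seed(0)
model = RewardModel()
preferred = torch.randn(4, 16)     # embeddings of "preferred" responses
dispreferred = torch.randn(4, 16)  # embeddings of "dispreferred" responses
loss = bradley_terry_loss(model(preferred), model(dispreferred))
print(loss.item())
```

Note how each preference pair is collapsed into a single scalar comparison; LRHP's premise is that this scalar discards structure that a learned preference representation could retain.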

Indexed In: EI
Language: English
Source URL: [http://ir.psych.ac.cn/handle/311026/49205]
Collection: Institute of Psychology_CAS Key Laboratory of Behavioral Science
Author Affiliations:
1.CAS Key Laboratory of Behavioral Science, Institute of Psychology, CAS, Beijing, China
2.NiuTrans Research, Shenyang, China
3.School of Computer Science and Engineering, Northeastern University, Shenyang, China
Recommended Citation
GB/T 7714
Chenglong Wang, Yang Gan, Yifu Huo, et al. LRHP: Learning Representations for Human Preferences via Preference Pairs[J]. arXiv, 2024.
APA: Chenglong Wang, Yang Gan, Yifu Huo, Yongyu Mu, Qiaozhi He, ... & Jingbo Zhu. (2024). LRHP: Learning Representations for Human Preferences via Preference Pairs. arXiv.
MLA: Chenglong Wang, et al. "LRHP: Learning Representations for Human Preferences via Preference Pairs". arXiv (2024).

Deposit Method: OAI Harvesting

Source: Institute of Psychology


Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.