Self-Supervised learning for Conversational Recommendation
Document Type: Journal Article
Authors | Li, Shuokai (3,8); Xie, Ruobing (2); Zhu, Yongchun (3,8); Zhuang, Fuzhen (6,7); Tang, Zhenwei (1); Zhao, Wayne Xin (4,5); He, Qing (3,8) |
Journal | INFORMATION PROCESSING & MANAGEMENT |
Publication Date | 2022-11-01 |
Volume | 59 |
Issue | 6 |
Pages | 19 |
Keywords | Conversational recommender system; Self-supervised learning; Knowledge |
ISSN | 0306-4573 |
DOI | 10.1016/j.ipm.2022.103067 |
Abstract | A conversational recommender system (CRS) aims to model user preference through interactive conversations. Although some works exist, they still have two drawbacks: (1) they rely on large amounts of training data and suffer from the data sparsity problem; and (2) they do not fully leverage the different types of knowledge extracted from dialogues. To address these issues in CRS, we explore the intrinsic correlations of different types of knowledge by self-supervised learning, and propose the model SSCR, which stands for Self-Supervised learning for Conversational Recommendation. The main idea is to jointly consider both the semantic and structural knowledge via three self-supervision signals in both the recommendation and dialogue modules. First, we carefully design two auxiliary self-supervised objectives, a token-level task and a sentence-level task, to explore the semantic knowledge. Then, we extract the structural knowledge from user-mentioned entities based on external knowledge graphs. Finally, we model the inter-information between the semantic and structural knowledge with the advantages of contrastive learning. As existing similarity functions fail to achieve this goal, we propose a novel similarity function based on the negative log-likelihood loss. Comprehensive experimental results on two real-world CRS datasets (including both English and Chinese, with about 10,000 dialogues) show the superiority of our proposed method. Concretely, in recommendation, SSCR achieves an improvement of about 5%~15% over state-of-the-art baselines on hit rate, mean reciprocal rank and normalized discounted cumulative gain. In dialogue generation, SSCR outperforms baselines on both automatic evaluations (distinct n-gram, BLEU and perplexity) and human evaluations (fluency and informativeness). |
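The abstract's key idea is aligning the dialogue's semantic representation with the knowledge-graph-based structural representation through a contrastive objective. The paper states that its similarity function is built on a negative log-likelihood loss but the exact formulation is not given here, so the following is only a minimal, illustrative sketch: an in-batch InfoNCE-style alignment between two hypothetical encoder outputs (`semantic_repr`, `structural_repr`), not the authors' actual SSCR implementation.

```python
# Illustrative sketch only (assumed encoders, dimensions, and objective);
# SSCR's own similarity function based on negative log-likelihood may differ.
import torch
import torch.nn.functional as F


def contrastive_alignment_loss(semantic_repr, structural_repr, temperature=0.1):
    """In-batch contrastive loss: the i-th dialogue's semantic vector should
    match the i-th dialogue's structural (KG-entity) vector and be pushed
    away from the other samples in the batch."""
    # Normalize so dot products behave like cosine similarities.
    sem = F.normalize(semantic_repr, dim=-1)        # [batch, dim]
    struct = F.normalize(structural_repr, dim=-1)   # [batch, dim]

    logits = sem @ struct.t() / temperature         # [batch, batch] similarities
    targets = torch.arange(sem.size(0), device=sem.device)

    # Negative log-likelihood of the correct pairing, in both directions.
    loss_s2k = F.cross_entropy(logits, targets)
    loss_k2s = F.cross_entropy(logits.t(), targets)
    return 0.5 * (loss_s2k + loss_k2s)


if __name__ == "__main__":
    # Random stand-ins for pooled dialogue states and aggregated KG embeddings.
    semantic = torch.randn(8, 128)
    structural = torch.randn(8, 128)
    print(contrastive_alignment_loss(semantic, structural))
```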
Funding | National Natural Science Foundation of China [61976204]; National Natural Science Foundation of China [U1811461]; National Natural Science Foundation of China [62176014]; National Natural Science Foundation of China [U1836206] |
WOS Research Areas | Computer Science; Information Science & Library Science |
Language | English |
WOS Record Number | WOS:000861188400006 |
Publisher | ELSEVIER SCI LTD |
Source URL | http://119.78.100.204/handle/2XEOYT63/19824 |
Collection | Institute of Computing Technology, Chinese Academy of Sciences: Journal Articles |
Corresponding Authors | Zhuang, Fuzhen; He, Qing |
Author Affiliations | 1. Univ Toronto, Toronto, ON, Canada; 2. Tencent, WeChat Search Applicat Dept, Shenzhen, Peoples R China; 3. Univ Chinese Acad Sci, Beijing 100049, Peoples R China; 4. Beijing Acad Artificial Intelligence, Beijing Key Lab Big Data Management & Anal Methods, Beijing, Peoples R China; 5. Renmin Univ, Gaoling Sch Artificial Intelligence, Beijing, Peoples R China; 6. Beihang Univ, Sch Comp Sci, SKLSDE, Beijing 100191, Peoples R China; 7. Beihang Univ, Inst Artificial Intelligence, Beijing 100191, Peoples R China; 8. Chinese Acad Sci, Inst Comp Technol, Key Lab Intelligent Informat Proc, Beijing 100190, Peoples R China |
Recommended Citation (GB/T 7714) | Li, Shuokai, Xie, Ruobing, Zhu, Yongchun, et al. Self-Supervised learning for Conversational Recommendation[J]. INFORMATION PROCESSING & MANAGEMENT, 2022, 59(6): 19. |
APA | Li, Shuokai, Xie, Ruobing, Zhu, Yongchun, Zhuang, Fuzhen, Tang, Zhenwei, ... & He, Qing. (2022). Self-Supervised learning for Conversational Recommendation. INFORMATION PROCESSING & MANAGEMENT, 59(6), 19. |
MLA | Li, Shuokai, et al. "Self-Supervised learning for Conversational Recommendation". INFORMATION PROCESSING & MANAGEMENT 59.6 (2022): 19. |
Deposit Method: OAI harvest
Source: Institute of Computing Technology