Chinese Academy of Sciences Institutional Repositories Grid
Zero-shot language extension for dialogue state tracking via pre-trained models and multi-auxiliary-tasks fine-tuning

Document Type: Journal article

Authors: Xiang, Lu 2,3; Zhao, Yang 2,3; Zhu, Junnan 2,3; Zhou, Yu 1,2,3; Zong, Chengqing 2,3
Journal: KNOWLEDGE-BASED SYSTEMS
Publication Date: 2023-01-10
Volume: 259; Pages: 14
ISSN: 0950-7051
Keywords: Dialogue state tracking; Zero-shot language extension; Multilingual DST; Pre-trained models; Multi-auxiliary-tasks fine-tuning
DOI: 10.1016/j.knosys.2022.110015
Corresponding Author: Zhou, Yu (yzhou@nlpr.ia.ac.cn)
Abstract: Dialogue state tracking (DST), a crucial component of the task-oriented dialogue system (TOD), is designed to track the user's goal. Existing DST models mainly focus on monolingual dialogue input and fail to meet the growing need for a TOD to provide multilingual services. Therefore, this paper proposes a novel Zero-shot Language Extension scenario for DST, extending a monolingual DST to a multilingual DST without additional costly dialogue data annotation. In this scenario, the multilingual DST needs only a single shared model to handle multilingual input and generate a unified dialogue state. This setting makes it easy to deploy a complete multilingual TOD, since the unified state can be reused by the downstream components of the existing monolingual TOD. Specifically, we achieve the language extension by multi-auxiliary-tasks fine-tuning of multilingual pre-trained models, where five relevant auxiliary tasks are jointly designed: monolingual DST, cross-lingual DST, forward word translation, utterance recovery, and semantic similarity. The extended multilingual DST model is enhanced through joint optimization with all the auxiliary tasks, capturing multilingual context understanding and cross-lingual alignment characteristics. Comprehensive experiments on the Multilingual WOZ dataset (English -> German and English -> Italian) and the cross-lingual MultiWOZ dataset (English -> Chinese and Chinese -> English) demonstrate the effectiveness and superiority of the proposed method. (c) 2022 Elsevier B.V. All rights reserved.
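To make the multi-auxiliary-tasks idea concrete, below is a minimal PyTorch sketch, not the authors' code, of how the five auxiliary losses named in the abstract could be combined into one joint training objective. The class name, loss weights, and dummy inputs are illustrative assumptions.

```python
# Minimal sketch (an assumption, not the paper's implementation): combine the
# five auxiliary-task losses from the abstract into one joint objective for
# fine-tuning a shared multilingual DST model.
import torch
import torch.nn as nn


class MultiAuxiliaryTaskLoss(nn.Module):
    """Weighted sum of auxiliary-task losses; task names follow the abstract,
    weights are hypothetical (equal by default)."""

    def __init__(self, weights=None):
        super().__init__()
        self.weights = weights or {
            "monolingual_dst": 1.0,
            "cross_lingual_dst": 1.0,
            "forward_word_translation": 1.0,
            "utterance_recovery": 1.0,
            "semantic_similarity": 1.0,
        }

    def forward(self, task_losses):
        # task_losses: dict mapping task name -> scalar loss tensor computed
        # from the shared multilingual pre-trained encoder's outputs.
        total = torch.zeros(())
        for name, loss in task_losses.items():
            total = total + self.weights.get(name, 1.0) * loss
        return total


if __name__ == "__main__":
    criterion = MultiAuxiliaryTaskLoss()
    # Dummy per-task losses standing in for real task heads.
    dummy = {name: torch.rand(()) for name in criterion.weights}
    joint_loss = criterion(dummy)
    print(float(joint_loss))  # single scalar that would be back-propagated
```

In actual training, each dummy entry would be replaced by the real loss of the corresponding auxiliary task computed on the shared multilingual encoder, and joint_loss.backward() would update the shared parameters.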
Funding Project: National Key R&D Program of China [2020AAA0108600]
WOS Research Area: Computer Science
Language: English
Publisher: ELSEVIER
WOS Record Number: WOS:000883002100001
Funding Organization: National Key R&D Program of China
Source URL: http://ir.ia.ac.cn/handle/173211/51248
Collection: State Key Laboratory of Multimodal Artificial Intelligence Systems
Author Affiliations:
1. Zhongke Fanyu Technol Co Ltd, Fanyu AI Lab, Beijing, Peoples R China
2. Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing, Peoples R China
3. Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing, Peoples R China
Recommended Citation
GB/T 7714: Xiang, Lu, Zhao, Yang, Zhu, Junnan, et al. Zero-shot language extension for dialogue state tracking via pre-trained models and multi-auxiliary-tasks fine-tuning[J]. KNOWLEDGE-BASED SYSTEMS, 2023, 259: 14.
APA: Xiang, Lu, Zhao, Yang, Zhu, Junnan, Zhou, Yu, & Zong, Chengqing. (2023). Zero-shot language extension for dialogue state tracking via pre-trained models and multi-auxiliary-tasks fine-tuning. KNOWLEDGE-BASED SYSTEMS, 259, 14.
MLA: Xiang, Lu, et al. "Zero-shot language extension for dialogue state tracking via pre-trained models and multi-auxiliary-tasks fine-tuning". KNOWLEDGE-BASED SYSTEMS 259 (2023): 14.

Deposit Method: OAI harvesting

Source: Institute of Automation

Unless otherwise stated, all content in this system is protected by copyright and all rights are reserved.