Chinese Academy of Sciences Institutional Repositories Grid
UNIFIED PROMPT LEARNING MAKES PRE-TRAINED LANGUAGE MODELS BETTER FEW-SHOT LEARNERS

Document type: Conference paper

Authors: Jin Feihu 1,2; Lu Jinliang 1,2; Zhang Jiajun 1,2
Publication date: 2023-02
Conference date: 2023-06-03
Conference location: Rhodes Island, Greece
DOI: 10.1109/ICASSP49357.2023.10095738
Abstract (English)

Language prompting induces the model to produce textual output during training and achieves remarkable performance in few-shot learning scenarios. However, current prompt-based methods either use the same task-specific prompt for every instance, losing instance-dependent information, or generate an instance-dependent prompt for each instance, lacking information shared across the task. In this paper, we propose an efficient few-shot learning method that dynamically decides the degree to which task-specific and instance-dependent information are incorporated, according to the characteristics of each task and instance, enriching the prompt with both kinds of information. Extensive experiments on a wide range of natural language understanding tasks demonstrate that our approach obtains significant improvements over prompt-based fine-tuning baselines in a few-shot setting with only about 0.1% of parameters tuned. Moreover, our approach outperforms existing state-of-the-art efficient few-shot learning methods on several natural language understanding tasks.
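The idea described in the abstract can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch example of the general mechanism: a shared task-specific soft prompt is blended with an instance-dependent prompt through a learned gate while the pre-trained model stays frozen. This is not the authors' implementation; all module names, dimensions, and the pooling choice are assumptions.

# Minimal sketch (not the paper's code) of gating between a shared
# task-specific prompt and an instance-dependent prompt.
import torch
import torch.nn as nn

class UnifiedPrompt(nn.Module):
    def __init__(self, hidden_size=768, prompt_len=20):
        super().__init__()
        # Task-specific prompt: soft tokens shared by all instances of the task.
        self.task_prompt = nn.Parameter(torch.randn(prompt_len, hidden_size) * 0.02)
        # Instance-dependent prompt generator: maps a pooled instance
        # representation to a prompt of the same shape.
        self.instance_proj = nn.Linear(hidden_size, prompt_len * hidden_size)
        # Gate that decides, per instance, how much of each prompt to keep.
        self.gate = nn.Sequential(nn.Linear(hidden_size, 1), nn.Sigmoid())
        self.prompt_len, self.hidden_size = prompt_len, hidden_size

    def forward(self, instance_repr):
        # instance_repr: (batch, hidden_size), e.g. mean-pooled states from
        # the frozen encoder (pooling choice is an assumption here).
        batch = instance_repr.size(0)
        inst_prompt = self.instance_proj(instance_repr).view(
            batch, self.prompt_len, self.hidden_size)
        g = self.gate(instance_repr).unsqueeze(-1)            # (batch, 1, 1)
        task_prompt = self.task_prompt.unsqueeze(0).expand(batch, -1, -1)
        # Convex combination: g controls the degree of instance-dependent
        # versus shared task-specific information in the final prompt.
        return g * inst_prompt + (1.0 - g) * task_prompt

# Usage: prepend the returned prompt embeddings to the frozen PLM's input
# embeddings; only UnifiedPrompt's parameters (on the order of 0.1% of the
# model) are updated during few-shot training.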

Source URL: http://ir.ia.ac.cn/handle/173211/51724
Collection: State Key Laboratory of Pattern Recognition _ Natural Language Processing
Corresponding author: Zhang Jiajun
Affiliations: 1. University of Chinese Academy of Sciences
2. Institute of Automation, Chinese Academy of Sciences
Recommended citation (GB/T 7714):
Jin Feihu, Lu Jinliang, Zhang Jiajun. UNIFIED PROMPT LEARNING MAKES PRE-TRAINED LANGUAGE MODELS BETTER FEW-SHOT LEARNERS[C]. Rhodes Island, Greece, 2023-06-03.

Ingestion method: OAI harvesting

Source: Institute of Automation

