Chinese Academy of Sciences Institutional Repositories Grid
Generative Calibration for In-context Learning

Document Type: Conference Paper

Authors: Zhongtao Jiang 1,4; Yuanzhe Zhang 1,4; Cao Liu 3; Jun Zhao 1,4; Kang Liu 1,2,4
Publication Date: 2023-10-06
Conference Date: 2023-10-06
Conference Location: Singapore
Abstract

As one of the most exciting features of large language models (LLMs), in-context learning (ICL) is a mixed blessing. While it allows users to quickly prototype a task solver with only a few training examples, performance is generally sensitive to various configurations of the prompt, such as the choice or order of the training examples. In this paper, we theoretically and empirically identify, for the first time, that this paradox is mainly due to the label shift of the in-context model relative to the data distribution: LLMs shift the label marginal p(y) while retaining a good label conditional p(x|y). With this understanding, we can simply calibrate the in-context predictive distribution by adjusting the label marginal, which is estimated via Monte-Carlo sampling over the in-context model, i.e., generation from the LLM. We call our approach generative calibration. We conduct exhaustive experiments with 12 text classification tasks and 12 LLMs scaling from 774M to 33B parameters, and generally find that the proposed method greatly and consistently outperforms ICL as well as state-of-the-art calibration methods, by up to 27% absolute in macro-F1. Meanwhile, the proposed method is stable under different prompt configurations.
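The calibration step described in the abstract can be summarized in a few lines. Below is a minimal sketch, not the authors' released code: it assumes two hypothetical helpers, llm_generate (samples inputs x from the LLM conditioned on the in-context prompt) and llm_label_probs (returns the in-context predictive distribution p(y | x, prompt)); both names are illustrative assumptions.

```python
import numpy as np

# Hypothetical helpers (assumptions, not from the paper's code):
#   llm_generate(prompt, n)    -> list of n sampled inputs x ~ p(x | prompt)
#   llm_label_probs(prompt, x) -> np.ndarray, in-context distribution p(y | x, prompt)

def estimate_label_marginal(prompt, llm_generate, llm_label_probs, n_samples=100):
    """Monte-Carlo estimate of the in-context label marginal p(y):
    sample inputs from the LLM itself, then average the label distributions."""
    samples = llm_generate(prompt, n_samples)            # x_i ~ p(x | prompt)
    probs = np.stack([llm_label_probs(prompt, x) for x in samples])
    return probs.mean(axis=0)                            # estimated p(y)

def calibrated_predict(prompt, x, llm_label_probs, label_marginal):
    """Generative calibration: divide the predictive distribution by the
    estimated (shifted) label marginal and renormalize,
    i.e., p_cal(y | x) proportional to p(y | x, prompt) / p(y)."""
    p = llm_label_probs(prompt, x) / label_marginal
    return p / p.sum()
```

The design follows the abstract's reasoning directly: since the in-context model is assumed to have a good label conditional p(x|y) but a shifted label marginal p(y), dividing out the estimated marginal corrects the predictive distribution without touching the conditional.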

Source URL: http://ir.ia.ac.cn/handle/173211/57263
Subject Area: Laboratory of Cognition and Decision Intelligence for Complex Systems
Author Affiliations:
1. The Laboratory of Cognition and Decision Intelligence for Complex Systems, Institute of Automation, Chinese Academy of Sciences
2. Shanghai Artificial Intelligence Laboratory
3. Meituan
4. School of Artificial Intelligence, University of Chinese Academy of Sciences
Recommended Citation (GB/T 7714):
Zhongtao Jiang, Yuanzhe Zhang, Cao Liu, et al. Generative Calibration for In-context Learning[C]. Singapore, 2023-10-06.

Ingestion Method: OAI harvesting

Source: Institute of Automation


Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.