Chinese Academy of Sciences Institutional Repositories Grid
Logic Traps in Evaluating Attribution Scores

Document Type: Conference Paper

Authors: Ju YM (鞠一鸣); Zhang YZ (张元哲); Yang C (杨朝); Jiang ZT (江忠涛); Liu K (刘康); Zhao J (赵军)
Publication Date: 2022-05
Conference Dates: 22–27 May 2022
Conference Location: Dublin
Pages: 5911–5922
Abstract

Modern deep learning models are notoriously opaque, which has motivated the development of methods for interpreting how deep models predict. This goal is usually approached with attribution methods, which assess the influence of features on model predictions. As explanation methods, the criterion for evaluating attribution methods is how accurately they reflect the actual reasoning process of the model (faithfulness). Meanwhile, since the reasoning process of deep models is inaccessible, researchers design various evaluation methods to demonstrate their arguments. However, most works ignore crucial logic traps in these evaluation methods, causing inaccurate evaluation and unfair comparison. This paper systematically reviews existing methods for evaluating attribution scores and summarizes the logic traps in these methods. We further conduct experiments to demonstrate the existence of each logic trap. Through both theoretical and experimental analysis, we hope to draw attention to the inaccurate evaluation of attribution scores. Moreover, with this paper, we suggest that the community stop focusing on improving performance under unreliable evaluation systems and instead direct effort toward reducing the impact of the proposed logic traps.
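The abstract refers to attribution methods, which score each input feature's influence on a prediction, and to evaluation methods built on top of those scores. A minimal sketch of the idea, not taken from the paper: the toy logistic-regression model, its weights, and the gradient-times-input scorer below are all illustrative assumptions, followed by a simple deletion-style faithfulness check of the kind such evaluations commonly use.

```python
import numpy as np

# Hypothetical linear model: weights and bias chosen arbitrarily for illustration.
w = np.array([2.0, -1.0, 0.5, 0.0])
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return sigmoid(x @ w + b)

def grad_x_input(x):
    # Gradient-times-input attribution for this model:
    # d(sigmoid(x @ w + b))/dx_i * x_i = p * (1 - p) * w_i * x_i
    p = predict(x)
    return p * (1.0 - p) * w * x

x = np.array([1.0, 1.0, 1.0, 1.0])
attr = grad_x_input(x)

# Deletion-style check: zero out features in order of decreasing attribution
# and record how the prediction changes; a faster drop is commonly read as
# evidence that the attribution scores are faithful.
order = np.argsort(-attr)
x_del = x.copy()
drops = []
for i in order:
    x_del[i] = 0.0
    drops.append(predict(x_del))

print("attributions:", attr)
print("predictions after deletions:", drops)
```

Note that even in this toy setting the check is indirect: it measures the model's behavior on perturbed (out-of-distribution) inputs, not its reasoning on the original input, which is one of the gaps between evaluation and faithfulness that the paper examines.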

Language: English
Source URL: http://ir.ia.ac.cn/handle/173211/52277
Collection: National Laboratory of Pattern Recognition / Natural Language Processing
Author Affiliations:
1. School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
2. National Laboratory of Pattern Recognition, Institute of Automation, CAS, Beijing, China
Recommended Citation (GB/T 7714):
Ju YM, Zhang YZ, Yang C, et al. Logic Traps in Evaluating Attribution Scores[C]. Dublin, 22–27 May 2022: 5911–5922.

Deposit Method: OAI harvesting

Source: Institute of Automation


Unless otherwise stated, all content in this system is protected by copyright, and all rights are reserved.