Chinese Academy of Sciences Institutional Repositories Grid
OpenMix: Exploring Outlier Samples for Misclassification Detection

Document Type: Conference Paper

Authors: Zhu Fei (朱飞); Zhen Cheng; Xu-Yao Zhang; Cheng-Lin Liu
Publication Date: 2023-06-18
Conference Dates: Jun 18-22, 2023
Conference Location: Vancouver, Canada
Abstract

Reliable confidence estimation for deep neural classifiers is a challenging yet fundamental requirement in high-stakes applications. Unfortunately, modern deep neural networks are often overconfident in their erroneous predictions. In this work, we exploit easily available outlier samples, i.e., unlabeled samples from non-target classes, to help detect misclassification errors. In particular, we find that the well-known Outlier Exposure, which is powerful in detecting out-of-distribution (OOD) samples from unknown classes, does not provide any gain in identifying misclassification errors. Based on these observations, we propose a novel method called OpenMix, which incorporates open-world knowledge by learning to reject uncertain pseudo-samples generated via outlier transformation. OpenMix significantly improves confidence reliability under various scenarios, establishing a strong and unified framework for detecting both misclassified samples from known classes and OOD samples from unknown classes. The code is publicly available at https://github.com/Impression2805/OpenMix.
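The abstract only summarizes the mechanism, so the following is a minimal PyTorch-style sketch of the described idea, not the authors' released implementation (see the linked repository for that). It assumes a Mixup-style outlier transformation, a Beta(alpha, alpha) interpolation coefficient, and a classifier with an extra reject class; the function name openmix_style_loss and all hyperparameters are illustrative assumptions.

```python
# Sketch of "learning to reject uncertain pseudo-samples generated via outlier
# transformation": mix labeled in-distribution images with unlabeled outlier
# images and softly assign the mixed samples to an extra reject class.
# Assumptions (not from the record): Mixup interpolation, Beta(alpha, alpha)
# sampling, and a (num_classes + 1)-way output head.
import torch
import torch.nn.functional as F

def openmix_style_loss(model, x_id, y_id, x_outlier, num_classes, alpha=10.0):
    """Cross-entropy on clean samples plus a soft reject-class term on mixed ones."""
    # Standard supervised loss on the labeled in-distribution batch.
    logits_id = model(x_id)                      # shape: (B, num_classes + 1)
    loss_id = F.cross_entropy(logits_id, y_id)

    # Outlier transformation (assumed here to be Mixup between ID and outlier images).
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    x_mix = lam * x_id + (1.0 - lam) * x_outlier[: x_id.size(0)]

    # Soft target: keep lam of the mass on the original class and move
    # (1 - lam) to the extra reject class at index num_classes.
    target = F.one_hot(y_id, num_classes + 1).float() * lam
    target[:, num_classes] += 1.0 - lam

    logits_mix = model(x_mix)
    loss_mix = -(target * F.log_softmax(logits_mix, dim=1)).sum(dim=1).mean()

    return loss_id + loss_mix
```

At test time, misclassification detection would then rank predictions by a confidence score (e.g., the maximum softmax probability over the known classes); the exact scoring rule used by the paper is not stated in this record.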

Proceedings Publisher: IEEE/CVF
Source URL: http://ir.ia.ac.cn/handle/173211/52407
Collection: Institute of Automation / National Laboratory of Pattern Recognition / Pattern Analysis and Learning Group
Corresponding Author: Zhu Fei (朱飞)
Author Affiliations:
1. School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
2. MAIS, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
Recommended Citation
GB/T 7714
Zhu Fei, Zhen Cheng, Xu-Yao Zhang, et al. OpenMix: Exploring Outlier Samples for Misclassification Detection[C]. Vancouver, Canada, Jun 18-22, 2023.

Deposit Method: OAI Harvesting

Source: Institute of Automation


Unless otherwise stated, all content in this system is protected by copyright, and all rights are reserved.