Chinese Academy of Sciences Institutional Repositories Grid
Learning with Average Top-k Loss

Document Type: Conference Paper

Authors: Fan, Yanbo1,3,4; Lyu, Siwei1; Ying, Yiming2; Hu, Bao-Gang3,4
Publication Date: 2017
Conference Date: 2017
Conference Venue: Long Beach, CA, USA
Keywords: Supervised Learning; Aggregate Loss; Average Top-k
Abstract
In this work, we introduce the average top-k (ATk) loss as a new aggregate loss for supervised learning, which is the average of the k largest individual losses over a training dataset. We show that the ATk loss is a natural generalization of the two widely used aggregate losses, namely the average loss and the maximum loss, and that it can combine their advantages and mitigate their drawbacks to better adapt to different data distributions. Furthermore, it remains a convex function of all individual losses, which leads to convex optimization problems that can be solved effectively with conventional gradient-based methods. We provide an intuitive interpretation of the ATk loss based on its equivalent effect on the continuous individual loss functions, suggesting that it can reduce the penalty on correctly classified data. We further give a learning theory analysis of minimum average top-k (MATk) learning, covering the classification calibration of the ATk loss and the error bounds of ATk-SVM. We demonstrate the applicability of MATk learning for binary classification and regression using synthetic and real datasets.
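The ATk aggregate loss described in the abstract can be sketched in a few lines. This is not the authors' code, only a minimal NumPy illustration of the definition: given the n individual losses on a training set, average the k largest of them. Setting k = 1 recovers the maximum loss and k = n recovers the average loss; the individual loss values below are hypothetical.

```python
import numpy as np

def average_top_k_loss(losses, k):
    """Average of the k largest individual losses (the ATk aggregate loss).

    k = 1 gives the maximum loss; k = n gives the average loss.
    """
    losses = np.asarray(losses, dtype=float)
    top_k = np.sort(losses)[-k:]  # the k largest individual losses
    return float(top_k.mean())

# Hypothetical individual (e.g. hinge) losses for a toy batch of 5 examples.
losses = [0.0, 0.2, 1.5, 0.1, 3.0]
print(average_top_k_loss(losses, k=1))  # maximum loss: 3.0
print(average_top_k_loss(losses, k=2))  # (3.0 + 1.5) / 2 = 2.25
print(average_top_k_loss(losses, k=5))  # average loss: 0.96
```

Because the top-k average is a pointwise maximum of averages over k-element subsets, it is convex in the individual losses, which is what makes the resulting MATk learning problems amenable to standard gradient-based solvers.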
Source URL: [http://ir.ia.ac.cn/handle/173211/19993]
Collection: Institute of Automation / National Laboratory of Pattern Recognition / Multimedia Computing and Graphics Group
Corresponding Author: Lyu, Siwei
Affiliations: 1. Department of Computer Science, University at Albany, SUNY
2.Department of Mathematics and Statistics, University at Albany, SUNY
3.National Laboratory of Pattern Recognition, CASIA
4.University of Chinese Academy of Sciences
Recommended Citation
GB/T 7714
Fan, Yanbo, Lyu, Siwei, Ying, Yiming, et al. Learning with Average Top-k Loss[C]. In: . Long Beach, CA, USA, 2017.

Deposit Method: OAI harvesting

Source: Institute of Automation


Unless otherwise noted, all content in this system is protected by copyright, and all rights are reserved.