Narrowing the Gap: Improved Detector Training With Noisy Location Annotations
Document type: Journal article
作者 (Authors) | Wang, Shaoru1,2; Gao, Jin; Li, Bing; Hu, Weiming
刊名 (Journal) | IEEE TRANSACTIONS ON IMAGE PROCESSING
出版日期 (Publication date) | 2022
卷号 (Volume) | 31
页码 (Pages) | 6369-6380
关键词 (Keywords) | Annotations; Noise measurement; Detectors; Task analysis; Training; Object detection; Degradation; noisy label; Bayesian estimation; teacher-student learning
ISSN | 1057-7149
DOI | 10.1109/TIP.2022.3211468 |
通讯作者 (Corresponding author) | Gao, Jin (jin.gao@nlpr.ia.ac.cn)
英文摘要 (Abstract) | Deep learning methods require massive amounts of annotated data to optimize their parameters. For example, datasets with accurate bounding-box annotations are essential for modern object detection tasks. However, labeling with such pixel-wise accuracy is laborious and time-consuming, and elaborate labeling procedures, involving annotation review and acceptance testing, are indispensable for reducing man-made noise. In this paper, we focus on the impact of noisy location annotations on the performance of object detection approaches and aim to reduce, on the user side, the adverse effect of the noise. First, noticeable performance degradation is experimentally observed for both one-stage and two-stage detectors when noise is introduced to the bounding box annotations. For instance, our synthesized noise results in a performance decrease from 38.9% AP to 33.6% AP for the FCOS detector on the COCO test split, and from 37.8% AP to 33.7% AP for Faster R-CNN. Second, a self-correction technique based on a Bayesian filter for prediction ensemble is proposed to better exploit the noisy location annotations following a teacher-student learning paradigm. Experiments on both synthesized and real-world scenarios consistently demonstrate the effectiveness of our approach, e.g., our method increases the degraded performance of the FCOS detector from 33.6% AP to 35.6% AP on COCO.
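Note: the abstract only names the ingredients of the proposed self-correction (a Bayesian filter over a prediction ensemble within a teacher-student paradigm) without giving the formulation. As a rough, hedged illustration of the general idea only, the Python sketch below fuses a noisy annotated box with a teacher prediction by inverse-variance weighting of per-coordinate Gaussian estimates; the function `fuse_boxes` and the noise scales `sigma_ann`/`sigma_teacher` are hypothetical and are not taken from the paper.

```python
# Hypothetical sketch (not the paper's exact method): treat each box
# coordinate as a 1-D Gaussian and fuse the noisy annotation with a
# teacher model's prediction by inverse-variance (Bayesian) weighting.
import numpy as np

def fuse_boxes(noisy_box, teacher_box, sigma_ann=8.0, sigma_teacher=4.0):
    """Fuse a noisy ground-truth box with a teacher prediction.

    Boxes are (x1, y1, x2, y2) in pixels; sigma_* are assumed standard
    deviations (in pixels) of the annotation noise and of the teacher's
    localization error.
    """
    noisy_box = np.asarray(noisy_box, dtype=float)
    teacher_box = np.asarray(teacher_box, dtype=float)
    w_ann = 1.0 / sigma_ann ** 2          # precision of the annotation
    w_teacher = 1.0 / sigma_teacher ** 2  # precision of the teacher estimate
    # Posterior mean of two Gaussian observations = precision-weighted average.
    return (w_ann * noisy_box + w_teacher * teacher_box) / (w_ann + w_teacher)

if __name__ == "__main__":
    noisy = [100, 50, 220, 180]    # human-annotated box with location noise
    teacher = [104, 47, 216, 184]  # (ensemble-averaged) teacher prediction
    print(fuse_boxes(noisy, teacher))  # corrected box used to supervise the student
```

In this toy setup, the corrected box would replace the raw noisy annotation as the regression target for the student detector; the relative trust in annotation versus teacher is controlled entirely by the assumed noise scales.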
资助项目 (Funding projects) | National Key Research and Development Program of China [2020AAA0140003]; Beijing Natural Science Foundation [JQ22014]; Beijing Natural Science Foundation [L223003]; Natural Science Foundation of China [61972394]; Natural Science Foundation of China [62036011]; Natural Science Foundation of China [61721004]; Key Research Program of Frontier Sciences, Chinese Academy of Sciences (CAS) [QYZDJ-SSW-JSC040]; Youth Innovation Promotion Association, CAS
WOS研究方向 (WoS research areas) | Computer Science; Engineering
语种 (Language) | English
WOS记录号 (WoS accession number) | WOS:000870290900001
出版者 (Publisher) | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
资助机构 (Funding agencies) | National Key Research and Development Program of China; Beijing Natural Science Foundation; Natural Science Foundation of China; Key Research Program of Frontier Sciences, Chinese Academy of Sciences (CAS); Youth Innovation Promotion Association, CAS
源URL (Source URL) | http://ir.ia.ac.cn/handle/173211/50305
专题 (Collection) | Institute of Automation, National Laboratory of Pattern Recognition, Video Content Security Team
作者单位 (Author affiliations) | 1. Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing 100190, Peoples R China; 2. Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China; 3. Chinese Acad Sci, CAS Ctr Excellence Brain Sci & Intelligence Techn, Shanghai 200031, Peoples R China
推荐引用方式 GB/T 7714 (Recommended citation) | Wang, Shaoru, Gao, Jin, Li, Bing, et al. Narrowing the Gap: Improved Detector Training With Noisy Location Annotations[J]. IEEE TRANSACTIONS ON IMAGE PROCESSING, 2022, 31: 6369-6380.
APA | Wang, Shaoru, Gao, Jin, Li, Bing, & Hu, Weiming. (2022). Narrowing the Gap: Improved Detector Training With Noisy Location Annotations. IEEE TRANSACTIONS ON IMAGE PROCESSING, 31, 6369-6380.
MLA | Wang, Shaoru, et al. "Narrowing the Gap: Improved Detector Training With Noisy Location Annotations". IEEE TRANSACTIONS ON IMAGE PROCESSING 31 (2022): 6369-6380.
入库方式 (Ingestion method): OAI harvesting
来源 (Source): Institute of Automation