Chinese Academy of Sciences Institutional Repositories Grid
Semi- and Weakly- Supervised Semantic Segmentation with Deep Convolutional Neural Networks

Document Type: Conference Paper

Authors: Yuhang Wang; Jing Liu; Yong Li; Hanqing Lu
Publication Date: 2015
Conference Date: October 26 - 30, 2015
Conference Venue: Brisbane, Australia
Keywords: CNN; Semantic Segmentation; Semi-supervised Learning
Abstract: Successful semantic segmentation methods typically rely on training datasets containing a large number of pixel-wise labeled images. To alleviate the dependence on such a fully annotated training dataset, in this paper we propose a semi- and weakly-supervised learning framework that exploits images that mostly carry only image-level labels, together with very few pixel-level labeled images, through two stages of Convolutional Neural Network (CNN) training. First, a pixel-level supervised CNN is trained on the very few fully annotated images. Second, given a large number of images with only image-level labels available, a collaborative-supervised CNN is designed to jointly perform the pixel-level and image-level classification tasks, where the pixel-level labels are predicted by the fully-supervised network from the first stage. The collaborative-supervised network retains the discriminative ability of the fully-supervised model learned from fully labeled images, and further improves performance by incorporating more weakly labeled data. Our experiments on two challenging datasets, i.e., PASCAL VOC 2007 and LabelMe LMO, demonstrate the satisfactory performance of our approach, nearly matching the results achieved when all training images have pixel-level labels.
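To make the two-stage training described in the abstract concrete, below is a minimal, hypothetical PyTorch-style sketch of the stage-two collaborative supervision: a shared backbone with a pixel-level head trained against pseudo labels produced by the stage-one fully-supervised network, and an image-level head trained against the available image tags. The class and function names (CollaborativeSegNet, collaborative_loss), the toy backbone, and the loss weights are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only; not the authors' code. Backbone, names, and loss
# weights are hypothetical stand-ins for the real architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CollaborativeSegNet(nn.Module):
    """Shared backbone with a pixel-level (segmentation) head and an
    image-level (classification) head, mirroring the collaborative-supervised stage."""
    def __init__(self, num_classes):
        super().__init__()
        self.backbone = nn.Sequential(                 # stand-in for a deep CNN backbone
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        )
        self.pixel_head = nn.Conv2d(64, num_classes, 1)   # per-pixel class scores
        self.image_head = nn.Linear(64, num_classes)      # image-level class scores

    def forward(self, x):
        feat = self.backbone(x)
        pixel_logits = self.pixel_head(feat)                   # B x C x H x W
        image_logits = self.image_head(feat.mean(dim=(2, 3)))  # B x C (global pooling)
        return pixel_logits, image_logits

def collaborative_loss(pixel_logits, image_logits,
                       pseudo_pixel_labels, image_labels,
                       weight_pixel=1.0, weight_image=1.0):
    """Joint stage-two loss: pixel-level cross-entropy against pseudo labels
    predicted by the stage-one fully-supervised network, plus an image-level
    multi-label classification loss against the given image tags."""
    loss_pixel = F.cross_entropy(pixel_logits, pseudo_pixel_labels)          # B x H x W targets
    loss_image = F.binary_cross_entropy_with_logits(image_logits, image_labels)  # B x C multi-hot
    return weight_pixel * loss_pixel + weight_image * loss_image
```

In this sketch, the weakly labeled images contribute through both terms: the image-level term uses their ground-truth tags directly, while the pixel-level term uses pseudo labels inferred by the stage-one model, which is how the framework keeps pixel-level discrimination while scaling to image-level supervision.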
Proceedings: Proceedings of the 23rd Annual ACM Conference on Multimedia Conference
Source URL: [http://ir.ia.ac.cn/handle/173211/13444]
Collection: Institute of Automation / National Laboratory of Pattern Recognition / Image and Video Analysis Group
Corresponding Author: Jing Liu
Affiliation: Institute of Automation, Chinese Academy of Sciences
Recommended Citation
GB/T 7714
Yuhang Wang, Jing Liu, Yong Li, et al. Semi- and Weakly- Supervised Semantic Segmentation with Deep Convolutional Neural Networks[C]. In: Proceedings of the 23rd Annual ACM Conference on Multimedia Conference. Brisbane, Australia. October 26 - 30, 2015.

Deposit Method: OAI Harvesting

Source: Institute of Automation

