Chinese Academy of Sciences Institutional Repositories Grid
Adaptive Context Network for Scene Parsing

Document Type: Conference Paper

Authors: Jun Fu; Jing Liu; Yuhang Wang; Yong Li; Yongjun Bao; Jinhui Tang; Hanqing Lu
Publication Date: 2019-10
Conference Date: October 27 - November 2, 2019
Conference Venue: Seoul, Korea
Abstract (English)

Recent works attempt to improve scene parsing performance by exploring different levels of context, typically training a well-designed convolutional network that exploits useful contexts across all pixels equally. However, in this paper, we find that context demands vary across different pixels or regions in each image. Based on this observation, we propose an Adaptive Context Network (ACNet) that captures pixel-aware contexts by a competitive fusion of global context and local context according to per-pixel demands. Specifically, for a given pixel, the global context demand is measured by the similarity between the global feature and its local feature, and the complement of this value measures the local context demand. We model the two demand measurements with the proposed global context module and local context module, respectively, to generate adaptive contextual features. Furthermore, we stack multiple such modules to build several adaptive context blocks at different levels of the network and obtain a coarse-to-fine result. Finally, comprehensive experimental evaluations demonstrate the effectiveness of the proposed ACNet, and new state-of-the-art performance is achieved on four public datasets, i.e., Cityscapes, ADE20K, PASCAL Context, and COCO Stuff.
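
The adaptive fusion described in the abstract can be illustrated with a short sketch. Below is a minimal, hypothetical PyTorch module in which the per-pixel similarity between each local feature and the image-level global feature gates a competitive blend of global and local context. The class name AdaptiveContextFusion, the cosine-similarity gate, and the projection convolutions are illustrative assumptions, not the authors' released implementation.

    # Minimal sketch of the adaptive global/local fusion idea (assumptions noted above).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class AdaptiveContextFusion(nn.Module):
        """Fuse global and local context per pixel, weighted by how similar
        each pixel's local feature is to the image-level global feature."""
        def __init__(self, channels):
            super().__init__()
            self.global_proj = nn.Conv2d(channels, channels, kernel_size=1)
            self.local_proj = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

        def forward(self, x):
            n, c, h, w = x.shape
            # Image-level global feature via global average pooling.
            g = F.adaptive_avg_pool2d(x, 1)                       # (n, c, 1, 1)
            # Per-pixel similarity between global and local features, clamped to
            # [0, 1]; this plays the role of the global-context demand.
            sim = F.cosine_similarity(x, g.expand_as(x), dim=1)   # (n, h, w)
            alpha = sim.clamp(min=0).unsqueeze(1)                 # (n, 1, h, w)
            # Competitive fusion: alpha weights global context, (1 - alpha)
            # weights local context (the "reverse" demand in the abstract).
            global_ctx = self.global_proj(g).expand(n, c, h, w)
            local_ctx = self.local_proj(x)
            return alpha * global_ctx + (1 - alpha) * local_ctx

In the paper, several such modules are stacked at different levels of the network to form adaptive context blocks and refine the parsing result coarse-to-fine; the sketch above shows only a single fusion step.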
 

Conference Proceedings: IEEE International Conference on Computer Vision (ICCV 2019)
Proceedings Publisher: IEEE International Conference on Computer Vision
Language: English
Source URL: http://ir.ia.ac.cn/handle/173211/39204
Collection: Institute of Automation, National Laboratory of Pattern Recognition, Image and Video Analysis Team
Corresponding Author: Jing Liu
Affiliation: Institute of Automation, Chinese Academy of Sciences
Recommended Citation (GB/T 7714):
Jun Fu, Jing Liu, Yuhang Wang, et al. Adaptive Context Network for Scene Parsing[C]. In: IEEE International Conference on Computer Vision (ICCV 2019), Seoul, Korea, October 27 - November 2, 2019.

Deposit Method: OAI harvesting

Source: Institute of Automation


Unless otherwise stated, all content in this system is protected by copyright, and all rights are reserved.