Dual Attention Network for Scene Segmentation
Document Type: Conference Paper
Authors | Jun Fu; Jing Liu; Haijie Tian; Yong Li; Yongjun Bao; Zhiwei Fang; Hanqing Lu |
Publication Date | 2019-06 |
Conference Date | June 16-June 20, 2019 |
Conference Location | Long Beach, CA, USA |
Abstract | In this paper, we address the scene segmentation task by capturing rich contextual dependencies based on the self-attention mechanism. Unlike previous works that capture contexts by multi-scale feature fusion, we propose a Dual Attention Network (DANet) to adaptively integrate local features with their global dependencies. Specifically, we append two types of attention modules on top of dilated FCN, which model the semantic interdependencies in spatial and channel dimensions respectively. The position attention module selectively aggregates the feature at each position by a weighted sum of the features at all positions. Similar features would be related to each other regardless of their distances. Meanwhile, the channel attention module selectively emphasizes interdependent channel maps by integrating associated features among all channel maps. We sum the outputs of the two attention modules to further improve feature representation, which contributes to more precise segmentation results. We achieve new state-of-the-art segmentation performance on three challenging scene segmentation datasets, i.e., Cityscapes, PASCAL Context and COCO Stuff. In particular, a Mean IoU score of 81.5% on the Cityscapes test set is achieved without using coarse data. |
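The abstract describes the two attention branches only at a high level; the sketch below illustrates the idea in PyTorch-style code. The module names, the 1x1 query/key/value convolutions, and the learnable residual scale `gamma` are illustrative assumptions for this sketch, not the authors' released implementation.

```python
# Minimal sketch of the position- and channel-attention branches described in
# the abstract. Shapes and parameter names are assumptions, not DANet's code.
import torch
import torch.nn as nn


class PositionAttention(nn.Module):
    """Re-weights each spatial position by a weighted sum over all positions."""

    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable residual scale (assumed)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).view(b, -1, h * w).permute(0, 2, 1)   # B x HW x C'
        k = self.key(x).view(b, -1, h * w)                       # B x C' x HW
        attn = torch.softmax(torch.bmm(q, k), dim=-1)            # B x HW x HW similarity
        v = self.value(x).view(b, -1, h * w)                     # B x C x HW
        out = torch.bmm(v, attn.permute(0, 2, 1)).view(b, c, h, w)
        return self.gamma * out + x


class ChannelAttention(nn.Module):
    """Emphasizes interdependent channel maps via a C x C attention map."""

    def __init__(self):
        super().__init__()
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        b, c, h, w = x.shape
        q = x.view(b, c, -1)                                     # B x C x HW
        k = x.view(b, c, -1).permute(0, 2, 1)                    # B x HW x C
        attn = torch.softmax(torch.bmm(q, k), dim=-1)            # B x C x C
        out = torch.bmm(attn, x.view(b, c, -1)).view(b, c, h, w)
        return self.gamma * out + x


if __name__ == "__main__":
    feats = torch.randn(2, 64, 32, 32)   # stand-in for dilated-FCN features
    fused = PositionAttention(64)(feats) + ChannelAttention()(feats)
    print(fused.shape)                    # torch.Size([2, 64, 32, 32])
```

As in the abstract, both branches are applied to the same backbone feature map and their outputs are summed before prediction; only that fusion step is reproduced in the demo above.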
Proceedings | IEEE International Conference on Computer Vision and Pattern Recognition (CVPR 2019) |
Proceedings Publisher | IEEE International Conference on Computer Vision and Pattern Recognition |
Language | English |
Source URL | [http://ir.ia.ac.cn/handle/173211/39200] |
Collection | Institute of Automation_National Laboratory of Pattern Recognition_Image and Video Analysis Team |
Corresponding Author | Jing Liu |
Affiliation | Institute of Automation, Chinese Academy of Sciences |
Recommended Citation (GB/T 7714) | Jun Fu, Jing Liu, Haijie Tian, et al. Dual Attention Network for Scene Segmentation[C]. In: IEEE International Conference on Computer Vision and Pattern Recognition (CVPR 2019), Long Beach, CA, USA, June 16-June 20, 2019. |
Deposit Method: OAI harvesting
Source: Institute of Automation