Multiattention Network for Semantic Segmentation of Fine-Resolution Remote Sensing Images
Document Type: Journal Article
Authors | Li, Rui (2); Zheng, Shunyi (2); Zhang, Ce (3,4); Duan, Chenxi (5); Su, Jianlin (6); Wang, Libo (2); Atkinson, Peter M. (1,3,7)
Journal | IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING
Publication Date | 2021-07-14
Pages | 13
Keywords | Semantics; Image segmentation; Feature extraction; Remote sensing; Task analysis; Kernel; Complexity theory; Attention mechanism; fine-resolution remote sensing images; semantic segmentation
ISSN | 0196-2892
DOI | 10.1109/TGRS.2021.3093977
Corresponding Author | Duan, Chenxi (chenxiduan@whu.edu.cn)
Abstract | Semantic segmentation of remote sensing images plays an important role in a wide range of applications, including land resource management, biosphere monitoring, and urban planning. Although the accuracy of semantic segmentation in remote sensing images has been increased significantly by deep convolutional neural networks, several limitations exist in standard models. First, for encoder-decoder architectures such as U-Net, the utilization of multiscale features causes the underuse of information, where low-level features and high-level features are concatenated directly without any refinement. Second, long-range dependencies of feature maps are insufficiently explored, resulting in suboptimal feature representations associated with each semantic class. Third, even though the dot-product attention mechanism has been introduced and utilized in semantic segmentation to model long-range dependencies, the large time and space demands of attention impede its actual usage in application scenarios with large-scale input. This article proposes a multiattention network (MANet) to address these issues by extracting contextual dependencies through multiple efficient attention modules. A novel kernel attention mechanism with linear complexity is proposed to alleviate the large computational demand of attention. Based on kernel attention and channel attention, we integrate local feature maps extracted by ResNet-50 with their corresponding global dependencies and reweight interdependent channel maps adaptively. Numerical experiments on two large-scale fine-resolution remote sensing datasets demonstrate the superior performance of the proposed MANet. Code is available at https://github.com/lironui/Multi-Attention-Network.
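The abstract's key technical claim is attention with linear rather than quadratic complexity in the number of pixels. The sketch below shows how kernelized attention achieves this in general: a feature map replaces the softmax so the matrix product can be reassociated, avoiding the N x N similarity matrix. This is a minimal illustration assuming the common elu(x)+1 feature map from the linear-attention literature; the exact kernel attention formulation in MANet may differ, so see the linked repository for the authors' implementation.

```python
import numpy as np

def kernel_attention(Q, K, V, eps=1e-6):
    """Linear-complexity attention sketch (not MANet's exact kernel).

    Standard dot-product attention materializes the N x N matrix
    softmax(Q K^T), costing O(N^2). With a feature map phi(.), the
    product reassociates as phi(Q) @ (phi(K)^T @ V), costing O(N d^2).
    Shapes: Q, K: (N, d); V: (N, d_v).
    """
    def phi(X):
        # elu(x) + 1 keeps features positive so the normalizer is valid.
        return np.where(X > 0, X + 1.0, np.exp(X))

    Qp, Kp = phi(Q), phi(K)                    # (N, d)
    KV = Kp.T @ V                              # (d, d_v): aggregate keys/values once
    Z = Qp @ Kp.sum(axis=0, keepdims=True).T   # (N, 1): per-row normalizers
    return (Qp @ KV) / (Z + eps)

# Usage: a 64x64 feature map flattened to 4096 "pixels" with 64 channels.
N, d = 4096, 64
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((N, d)) for _ in range(3))
out = kernel_attention(Q, K, V)
print(out.shape)  # (4096, 64)
```

The reassociation is what makes attention tractable for large remote sensing tiles: for a 256 x 256 feature map (N = 65,536), the N x N matrix alone would need roughly 17 GB in float32, whereas the kernelized form only ever stores N x d and d x d intermediates.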
WOS Keywords | DIFFERENCE WATER INDEX; LAND-COVER; ATTENTION; NDWI
Funding Project | National Natural Science Foundation of China [41671452]
WOS Research Areas | Geochemistry & Geophysics; Engineering; Remote Sensing; Imaging Science & Photographic Technology
Language | English
WOS Accession Number | WOS:000732870100001
Publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Funding Organization | National Natural Science Foundation of China
Source URL | http://ir.igsnrr.ac.cn/handle/311030/168740
Collection | Institute of Geographic Sciences and Natural Resources Research, Chinese Academy of Sciences
Author Affiliations | 1. Univ Southampton, Geog & Environm Sci, Southampton SO17 1BJ, Hants, England; 2. Wuhan Univ, Sch Remote Sensing & Informat Engn, Wuhan 430079, Peoples R China; 3. Univ Lancaster, Lancaster Environm Ctr, Lancaster LA1 4YQ, England; 4. UK Ctr Ecol & Hydrol, Lancaster LA1 4AP, England; 5. Wuhan Univ, State Key Lab Informat Engn Surveying Mapping & R, Wuhan 430079, Peoples R China; 6. Shenzhen Zhuiyi Technol Co Ltd, Shenzhen 518054, Peoples R China; 7. Chinese Acad Sci, Inst Geog Sci & Nat Resources Res, Beijing 100101, Peoples R China
Recommended Citation (GB/T 7714) | Li, Rui, Zheng, Shunyi, Zhang, Ce, et al. Multiattention Network for Semantic Segmentation of Fine-Resolution Remote Sensing Images[J]. IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2021: 13.
APA | Li, Rui, Zheng, Shunyi, Zhang, Ce, Duan, Chenxi, Su, Jianlin, ... & Atkinson, Peter M. (2021). Multiattention Network for Semantic Segmentation of Fine-Resolution Remote Sensing Images. IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 13.
MLA | Li, Rui, et al. "Multiattention Network for Semantic Segmentation of Fine-Resolution Remote Sensing Images." IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING (2021): 13.
Deposit Method: OAI Harvesting
Source: Institute of Geographic Sciences and Natural Resources Research