Automatic polyp segmentation via image-level and surrounding-level context fusion deep neural network
Document Type | Journal Article
Authors | Wang, Changwei1,2; Xu, Rongtao; Xu, Shibiao; Meng, Weiliang; Zhang, Xiaopeng |
Journal | Engineering Applications of Artificial Intelligence |
Publication Date | 2023 |
Volume | 123; Issue: 2023; Pages: 106168 |
Keywords | Context information fusion; Colonoscopy; Polyp segmentation; Image-level and surrounding-level context |
DOI | https://doi.org/10.1016/j.engappai.2023.106168 |
Abstract | More than 95% of colorectal cancers develop gradually from polyps, so regular colonoscopic polyp examination plays an important role in cancer prevention and early treatment. However, automatic polyp segmentation remains a challenging task due to the low-contrast tissue environment and the small size and variety (e.g., shape, color, texture) of polyps. In this case, the rich context information in colonoscopy images is worth exploring to address the above issues. On the one hand, the image-level context, with its global receptive field, can be used to enhance the discrimination between the foreground and the background, alleviating the concealment and indistinguishability of polyps in colonoscopy images. On the other hand, the surrounding-level context, focused on the pathological region surrounding the polyp, carries more detailed features that are beneficial for polyp segmentation. Therefore, we propose a novel network named ISCNet that fuses image-level and surrounding-level context information for polyp segmentation. Specifically, we first introduce the Global-Guided Context Aggregation (GGCA) module to explicitly model the foreground and background of polyp segmentation through image-level context, thereby flexibly enhancing polyp-related features and suppressing background-related features. Then, we design the Diverse Surrounding Context Focus (DSCF) module to focus on the surrounding area of the polyp and extract diverse local contexts that refine the segmentation results. Finally, we fuse the feature maps derived from these two modules so that our ISCNet benefits from both image-level and surrounding-level context information. To verify the effectiveness of our method, we conduct comprehensive experimental evaluations on three challenging datasets. The quantitative and qualitative experimental results confirm that our ISCNet outperforms current state-of-the-art methods by a large margin. Our code is available at https://github.com/vvmedical/ISCNet. |
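To make the two-branch idea in the abstract concrete, here is a minimal, hedged numpy sketch of fusing an image-level (global) context branch with a surrounding-level (local) context branch. This is an illustrative toy, not the authors' GGCA/DSCF implementation: the global branch is approximated by a global-average-pooling channel gate, the local branch by a simple box-filter aggregation, and fusion by element-wise addition; all function names and operations here are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def image_level_context(feat):
    """Toy image-level context: global average pooling produces a per-channel
    descriptor, which gates channels to enhance foreground-related features
    and suppress background-related ones. feat has shape (C, H, W)."""
    g = feat.mean(axis=(1, 2))                 # (C,) global descriptor
    gate = sigmoid(g)                          # channel-wise attention weights
    return feat * gate[:, None, None]

def surrounding_level_context(feat, k=3):
    """Toy surrounding-level context: average each position's k x k
    neighborhood, a crude stand-in for learned local convolutions."""
    C, H, W = feat.shape
    pad = k // 2
    padded = np.pad(feat, ((0, 0), (pad, pad), (pad, pad)), mode="edge")
    out = np.zeros_like(feat)
    for dy in range(k):
        for dx in range(k):
            out += padded[:, dy:dy + H, dx:dx + W]
    return out / (k * k)

def fuse_contexts(feat):
    """Element-wise fusion of the global and local context branches."""
    return image_level_context(feat) + surrounding_level_context(feat)

feat = np.random.default_rng(0).standard_normal((4, 8, 8))
fused = fuse_contexts(feat)
print(fused.shape)  # (4, 8, 8)
```

In the paper the two branches are learned modules (GGCA and DSCF) and the fusion is applied to deep feature maps; this sketch only shows the data flow: one branch with a global receptive field, one with a local one, combined into a single feature map.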
URL | View full text |
Source URL | http://ir.ia.ac.cn/handle/173211/51579 |
Collection | State Key Laboratory of Multimodal Artificial Intelligence Systems; National Laboratory of Pattern Recognition_3D Visual Computing |
Corresponding Author | Wang, Changwei |
Author Affiliations | 1. MAIS, Institute of Automation, Chinese Academy of Sciences, China; 2. School of Artificial Intelligence, University of Chinese Academy of Sciences, China; 3. School of Artificial Intelligence, Beijing University of Posts and Telecommunications, China |
Recommended Citation (GB/T 7714) | Wang, Changwei, Xu, Rongtao, Xu, Shibiao, et al. Automatic polyp segmentation via image-level and surrounding-level context fusion deep neural network[J]. Engineering Applications of Artificial Intelligence, 2023, 123(2023): 106168. |
APA | Wang, Changwei, Xu, Rongtao, Xu, Shibiao, Meng, Weiliang, & Zhang, Xiaopeng. (2023). Automatic polyp segmentation via image-level and surrounding-level context fusion deep neural network. Engineering Applications of Artificial Intelligence, 123(2023), 106168. |
MLA | Wang, Changwei, et al. "Automatic polyp segmentation via image-level and surrounding-level context fusion deep neural network". Engineering Applications of Artificial Intelligence 123.2023 (2023): 106168. |
Deposit Method: OAI harvesting
Source: Institute of Automation
Unless otherwise noted, all content in this system is protected by copyright, and all rights are reserved.