Chinese Academy of Sciences Institutional Repositories Grid
DGL-RSIS: Decoupling global spatial context and local class semantics for training-free remote sensing image segmentation

Document Type: Journal Article

Authors: Li, Boyi 1; Zhang, Ce 1; Timmerman, Richard M. 1; Bao, Wenxuan 2
Journal: INTERNATIONAL JOURNAL OF APPLIED EARTH OBSERVATION AND GEOINFORMATION
Publication Date: 2026-02-01
Volume: 146  Pages: 105113
Keywords: Vision language model; Open-vocabulary semantic segmentation; Referring expression segmentation; Domain knowledge; Training-free
ISSN: 1569-8432
DOI: 10.1016/j.jag.2026.105113
Rights Ranking: 2
Document Subtype: Article
Abstract: The emergence of vision language models (VLMs) bridges the gap between vision and language, enabling multimodal understanding beyond traditional visual-only deep learning models. However, transferring VLMs from the natural image domain to remote sensing (RS) segmentation remains challenging due to the large domain gap and the diversity of RS inputs across tasks, particularly in open-vocabulary semantic segmentation (OVSS) and referring expression segmentation (RES). Here, we propose a training-free unified framework, termed DGL-RSIS, which decouples visual and textual representations and performs visual-language alignment at both local semantic and global contextual levels. Specifically, a Global-Local Decoupling (GLD) module decomposes textual inputs into local semantic tokens and global contextual tokens, while image inputs are partitioned into class-agnostic mask proposals. Then, a Local Visual-Textual Alignment (LVTA) module adaptively extracts context-aware visual features from the mask proposals and enriches textual features through knowledge-guided prompt engineering, achieving OVSS from a local perspective. Furthermore, a Global Visual-Textual Alignment (GVTA) module employs a global-enhanced Grad-CAM mechanism to capture contextual cues for referring expressions, followed by a mask selection module that integrates pixel-level activations into mask-level segmentation outputs, thereby achieving RES from a global perspective. Experiments on the iSAID (OVSS) and RRSIS-D (RES) benchmarks demonstrate that DGL-RSIS outperforms existing training-free approaches. Ablation studies further validate the effectiveness of each module. To the best of our knowledge, this is the first unified training-free framework for RS image segmentation, which effectively transfers the semantic capability of VLMs trained on natural images to the RS domain without additional training.
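The abstract's mask selection step, which integrates pixel-level activations into mask-level outputs, can be illustrated with a minimal sketch. The function name and the mean-pooling score below are assumptions for illustration only; the paper's actual selection criterion may differ.

```python
import numpy as np

def select_mask(activation, mask_proposals):
    """Score each class-agnostic mask proposal by the mean pixel-level
    activation (e.g. a Grad-CAM map) inside it, then return the index
    of the highest-scoring proposal. Illustrative only."""
    scores = []
    for mask in mask_proposals:
        mask = mask.astype(bool)
        # Empty proposals get -inf so they are never selected.
        scores.append(activation[mask].mean() if mask.any() else -np.inf)
    return int(np.argmax(scores))

# Toy example: a 4x4 activation map concentrated in the top-left corner,
# and two candidate mask proposals.
activation = np.zeros((4, 4))
activation[:2, :2] = 1.0
mask_a = np.zeros((4, 4), dtype=bool); mask_a[:2, :2] = True   # top-left
mask_b = np.zeros((4, 4), dtype=bool); mask_b[2:, 2:] = True   # bottom-right
best = select_mask(activation, [mask_a, mask_b])  # → 0 (top-left wins)
```

Converting pixel activations to a choice over mask proposals in this way yields sharp object boundaries (from the proposals) while the localization comes from the activation map.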
WOS Research Areas: Physical Geography; Remote Sensing
Language: English
WOS Record Number: WOS:001677301100002
Publisher: ELSEVIER
Source URL: [http://ir.igsnrr.ac.cn/handle/311030/220912]
Collection: Institute of Geographic Sciences and Natural Resources Research, Chinese Academy of Sciences
Corresponding Author: Zhang, Ce
Author Affiliations:
1. Univ Bristol, Sch Geog Sci, Univ Rd, Bristol BS8 1SS, England;
2. Chinese Acad Sci, Inst Geog Sci & Nat Resources Res, Beijing 100101, Peoples R China
Recommended Citation:
GB/T 7714: Li, Boyi, Zhang, Ce, Timmerman, Richard M., et al. DGL-RSIS: Decoupling global spatial context and local class semantics for training-free remote sensing image segmentation[J]. INTERNATIONAL JOURNAL OF APPLIED EARTH OBSERVATION AND GEOINFORMATION, 2026, 146: 105113.
APA: Li, Boyi, Zhang, Ce, Timmerman, Richard M., & Bao, Wenxuan. (2026). DGL-RSIS: Decoupling global spatial context and local class semantics for training-free remote sensing image segmentation. INTERNATIONAL JOURNAL OF APPLIED EARTH OBSERVATION AND GEOINFORMATION, 146, 105113.
MLA: Li, Boyi, et al. "DGL-RSIS: Decoupling global spatial context and local class semantics for training-free remote sensing image segmentation". INTERNATIONAL JOURNAL OF APPLIED EARTH OBSERVATION AND GEOINFORMATION 146 (2026): 105113.

Ingestion Method: OAI Harvesting

Source: Institute of Geographic Sciences and Natural Resources Research


Unless otherwise noted, all content in this system is protected by copyright, with all rights reserved.