Chinese Academy of Sciences Institutional Repositories Grid
MAPNet: Multi-modal attentive pooling network for RGB-D indoor scene classification

Document type: Journal article

Authors: Li, Yabei (1,2,3); Zhang, Zhang (1,2,3); Cheng, Yanhua (4); Wang, Liang (1,2,3,5); Tan, Tieniu (1,2,3,5)
Journal: PATTERN RECOGNITION
Publication date: 2019-06-01
Volume: 90, Pages: 436-449
Keywords: Indoor scene classification; Multi-modal fusion; RGB-D; Attentive pooling
ISSN: 0031-3203
DOI: 10.1016/j.patcog.2019.02.005
Corresponding author: Zhang, Zhang (zzhang@nlpr.ia.ac.cn)
Abstract: RGB-D indoor scene classification is an essential and challenging task. Although convolutional neural networks (CNNs) achieve excellent results on RGB-D object recognition, they have several limitations when extended to RGB-D indoor scene classification. 1) Semantic cues such as objects in an indoor scene have high spatial variability, so the spatially rigid global representation from a CNN is suboptimal. 2) Cluttered indoor scenes contain many redundant and noisy semantic cues, so discriminative information must be discerned among them. 3) Directly concatenating or summing global RGB and depth information, as in popular methods, cannot fully exploit the complementarity between the two modalities in complicated indoor scenarios. To address these problems, we propose a novel unified framework named Multi-modal Attentive Pooling Network (MAPNet). Two orderless attentive pooling blocks are constructed in MAPNet to aggregate semantic cues within and between modalities while maintaining spatial invariance. The Intra-modality Attentive Pooling (IAP) block aims to mine and pool discriminative semantic cues in each modality. The Cross-modality Attentive Pooling (CAP) block learns different contributions across the two modalities, which further guides the pooling of the selected discriminative semantic cues of each modality. We further show that the proposed model is interpretable, which helps in understanding the mechanisms of both scene classification and multi-modal fusion in MAPNet. Extensive experiments and analysis on the SUN RGB-D Dataset and NYU Depth Dataset V2 show the superiority of MAPNet over current state-of-the-art methods. (C) 2019 Elsevier Ltd. All rights reserved.
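The two pooling blocks described in the abstract (intra-modality attentive pooling over spatial locations, then a cross-modality weighting of the pooled descriptors) can be illustrated with a minimal numerical sketch. This is not the authors' implementation: the scoring vectors, feature sizes, and the single softmax gate over the two modalities are assumptions introduced only to show the shape of the computation.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attentive_pool(features, w):
    """Orderless attentive pooling over spatial locations (IAP-style).

    features: (N, C) array of N spatial feature vectors.
    w: (C,) scoring vector producing one attention logit per location
       (assumed here; the paper learns this scoring inside the network).
    Returns a single (C,) pooled descriptor that is invariant to the
    spatial order of the N locations.
    """
    logits = features @ w          # (N,) attention scores
    alpha = softmax(logits)        # weights over locations, sum to 1
    return alpha @ features        # attention-weighted sum: (C,)

def cross_modal_fuse(rgb_feat, depth_feat, gate_logits):
    """CAP-style fusion sketch: a softmax over the two modalities
    decides each one's contribution to the fused descriptor."""
    g = softmax(gate_logits)       # (2,) modality weights
    return g[0] * rgb_feat + g[1] * depth_feat

rng = np.random.default_rng(0)
rgb = attentive_pool(rng.standard_normal((49, 8)), rng.standard_normal(8))
dep = attentive_pool(rng.standard_normal((49, 8)), rng.standard_normal(8))
fused = cross_modal_fuse(rgb, dep, rng.standard_normal(2))
```

Because the attention weights form a convex combination over locations, the pooled descriptor is independent of how the spatial grid is ordered, which is the "orderless" property the abstract emphasizes.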
WOS keywords: IMAGE FEATURES
WOS research areas: Computer Science; Engineering
Language: English
WOS record number: WOS:000463130400036
Publisher: ELSEVIER SCI LTD
Source URL: [http://ir.ia.ac.cn/handle/173211/23477]
Collection: Institute of Automation, Center for Research on Intelligent Perception and Computing
Affiliations:
1. CASIA, CRIPAC, Beijing, Peoples R China
2. CASIA, NLPR, Beijing, Peoples R China
3. Univ Chinese Acad Sci, Beijing, Peoples R China
4. Tencent WeChat AI, Beijing, Peoples R China
5. CAS Ctr Excellence Brain Sci & Intelligence Techn, Beijing, Peoples R China
Recommended citation:
GB/T 7714: Li, Yabei, Zhang, Zhang, Cheng, Yanhua, et al. MAPNet: Multi-modal attentive pooling network for RGB-D indoor scene classification[J]. PATTERN RECOGNITION, 2019, 90: 436-449.
APA: Li, Yabei, Zhang, Zhang, Cheng, Yanhua, Wang, Liang, & Tan, Tieniu. (2019). MAPNet: Multi-modal attentive pooling network for RGB-D indoor scene classification. PATTERN RECOGNITION, 90, 436-449.
MLA: Li, Yabei, et al. "MAPNet: Multi-modal attentive pooling network for RGB-D indoor scene classification". PATTERN RECOGNITION 90 (2019): 436-449.

Deposit method: OAI harvesting

Source: Institute of Automation


Unless otherwise noted, all content in this system is protected by copyright, with all rights reserved.