Sound Active Attention Framework for Remote Sensing Image Captioning
Document Type | Journal Article
Authors | Lu, Xiaoqiang (2); Wang, Binqiang; Zheng, Xiangtao
Journal | IEEE Transactions on Geoscience and Remote Sensing
Publication Date | 2020-03
Volume | 58
Issue | 3
Pages | 1985-2000
Keywords | Active attention; remote sensing image captioning; semantic understanding
ISSN | 0196-2892; 1558-0644
DOI | 10.1109/TGRS.2019.2951636 |
Institution Rank | 1
Abstract | Attention mechanism-based image captioning methods have achieved good results in the remote sensing field, but they are driven by tagged sentences, which is called passive attention. However, different observers may give different levels of attention to the same image, so the attention of observers during testing may not be consistent with the attention during training. As a direct and natural form of human-machine interaction, speech is much faster than typing sentences, and sound can represent the attention of different observers. This is called active attention. Active attention can describe the image in a more targeted way; for example, in disaster assessment, the situation can be obtained quickly and the disaster areas related to the specific disaster can be located. A novel sound active attention framework is proposed for more specific caption generation according to the interest of the observer. First, sound is modeled by mel-frequency cepstral coefficients (MFCCs) and the image is encoded by convolutional neural networks (CNNs). Then, to handle the continuity characteristic of sound, a sound module and an attention module are designed based on gated recurrent units (GRUs). Finally, the sound-guided image feature produced by the attention module is imported into the output module to generate a descriptive sentence. Experiments on both fake and real sound data sets show that the proposed method can generate sentences that capture the focus of human attention. © 1980-2012 IEEE.
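The pipeline described in the abstract (MFCC frames fed through a GRU sound module, whose state then weights CNN image-region features via attention) can be sketched as follows. This is an illustrative sketch only, not the paper's implementation: the single GRU cell, all dimensions, the random weights, and the bilinear scoring matrix `W_att` are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

class GRUCell:
    """Minimal GRU cell: update gate z, reset gate r, candidate state."""
    def __init__(self, in_dim, hid_dim):
        s = 1.0 / np.sqrt(hid_dim)
        self.Wz = rng.uniform(-s, s, (hid_dim, in_dim + hid_dim))
        self.Wr = rng.uniform(-s, s, (hid_dim, in_dim + hid_dim))
        self.Wh = rng.uniform(-s, s, (hid_dim, in_dim + hid_dim))

    def step(self, x, h):
        xh = np.concatenate([x, h])
        z = 1.0 / (1.0 + np.exp(-self.Wz @ xh))          # update gate
        r = 1.0 / (1.0 + np.exp(-self.Wr @ xh))          # reset gate
        h_tilde = np.tanh(self.Wh @ np.concatenate([x, r * h]))
        return (1 - z) * h + z * h_tilde

def sound_guided_attention(mfcc_frames, image_regions, hid_dim=32):
    """Encode the MFCC sequence with a GRU (handling the continuity of
    sound), then score each CNN image region against the final sound
    state to obtain an attention distribution and a guided feature."""
    gru = GRUCell(mfcc_frames.shape[1], hid_dim)
    h = np.zeros(hid_dim)
    for frame in mfcc_frames:                 # recurrence over sound frames
        h = gru.step(frame, h)
    W_att = rng.uniform(-0.1, 0.1, (image_regions.shape[1], hid_dim))
    scores = image_regions @ (W_att @ h)      # one score per image region
    alpha = softmax(scores)                   # attention weights, sum to 1
    context = alpha @ image_regions           # sound-guided image feature
    return alpha, context

mfcc = rng.normal(size=(20, 13))     # 20 frames of 13 MFCC coefficients
regions = rng.normal(size=(49, 64))  # e.g. a 7x7 CNN grid of 64-d features
alpha, context = sound_guided_attention(mfcc, regions)
```

In the full framework, `context` would be passed to an output module (another recurrent decoder) to generate the caption word by word; the sketch stops at the attention step.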
Language | English
WOS Accession Number | WOS:000519598700037
Publisher | Institute of Electrical and Electronics Engineers Inc.
Source URL | http://ir.opt.ac.cn/handle/181661/93309
Collection | Xi'an Institute of Optics and Precision Mechanics — Center for Optical Imagery Analysis and Learning
Corresponding Author | Lu, Xiaoqiang
Author Affiliations | 1. School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, Beijing 100049, China; 2. Key Laboratory of Spectral Imaging Technology CAS, Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an 710119, China
Recommended Citation (GB/T 7714) | Lu, Xiaoqiang, Wang, Binqiang, Zheng, Xiangtao. Sound Active Attention Framework for Remote Sensing Image Captioning[J]. IEEE Transactions on Geoscience and Remote Sensing, 2020, 58(3): 1985-2000.
APA | Lu, Xiaoqiang, Wang, Binqiang, & Zheng, Xiangtao. (2020). Sound Active Attention Framework for Remote Sensing Image Captioning. IEEE Transactions on Geoscience and Remote Sensing, 58(3), 1985-2000.
MLA | Lu, Xiaoqiang, et al. "Sound Active Attention Framework for Remote Sensing Image Captioning." IEEE Transactions on Geoscience and Remote Sensing 58.3 (2020): 1985-2000.
Ingest Method | OAI harvesting
Source | Xi'an Institute of Optics and Precision Mechanics