Infrared and visible image fusion based on variational auto-encoder and infrared feature compensation
Document Type: Journal Article
Authors | Ren, Long 1,2,3 |
Journal | Infrared Physics and Technology |
Publication Date | 2021-09 |
Volume | 117 |
Keywords | Image fusion; Variational auto-encoder; Feature compensation; Convolutional neural network |
ISSN | 1350-4495 |
DOI | 10.1016/j.infrared.2021.103839 |
Institution Ranking | 1 |
Abstract | Thanks to their high sensitivity to fine detail, visible imaging devices capture images rich in textures and contours, which are important to visual perception. Unlike visible cameras, infrared imaging devices can detect targets that are invisible in visible images, because infrared sensors image differences in thermal radiation. The purpose of image fusion is therefore to merge as much meaningful feature information from the infrared and visible images into the fused image as possible, such as the contours and textures of the visible image and the thermal targets of the infrared image. In this paper, we propose an image fusion network based on a variational auto-encoder (VAE), which performs the fusion process in deep hidden layers. The proposed network is divided into an image fusion network and an infrared feature compensation network. First, the encoder of the image fusion network generates latent vectors in the hidden layers from the input visible and infrared images. Second, the two latent vectors are merged into one based on the product of Gaussian probability densities, and the decoder then reconstructs the fused image as the loss function decreases. Residual blocks and symmetric skip connections are added to the network to improve training efficiency. Finally, because of a limitation in the loss function of the fusion network, an infrared feature compensation network is designed to compensate for critical radiation features of the infrared image. Experimental results on publicly available datasets demonstrate that the proposed method is superior to other traditional and deep learning methods in both objective metrics and subjective visual perception. © 2021 |
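The abstract describes merging the two latent vectors through the product of their Gaussian probability densities. As a minimal sketch of that step, assuming diagonal-Gaussian latent posteriors and the standard closed-form product of two Gaussians (a precision-weighted average); the tensor names, shapes, and use of PyTorch are illustrative assumptions, not the authors' released code:

```python
# Illustrative sketch (not the authors' implementation): fusing two diagonal
# Gaussian latent posteriors by the product of their probability densities,
# as in product-of-experts style VAE formulations.
import torch

def product_of_gaussians(mu_vis, logvar_vis, mu_ir, logvar_ir, eps=1e-8):
    """Combine N(mu_vis, var_vis) and N(mu_ir, var_ir) into one Gaussian.

    The (renormalized) product of two diagonal Gaussians is again Gaussian,
    with precision-weighted mean and summed precisions.
    """
    precision_vis = 1.0 / (logvar_vis.exp() + eps)
    precision_ir = 1.0 / (logvar_ir.exp() + eps)
    var_fused = 1.0 / (precision_vis + precision_ir)
    mu_fused = var_fused * (precision_vis * mu_vis + precision_ir * mu_ir)
    return mu_fused, var_fused.log()

def reparameterize(mu, logvar):
    """Standard VAE reparameterization trick: z = mu + sigma * noise."""
    std = (0.5 * logvar).exp()
    return mu + std * torch.randn_like(std)

if __name__ == "__main__":
    # Toy latent maps: batch of 2, 64 latent channels, 16x16 spatial size (assumed shapes).
    mu_vis, logvar_vis = torch.randn(2, 64, 16, 16), torch.randn(2, 64, 16, 16)
    mu_ir, logvar_ir = torch.randn(2, 64, 16, 16), torch.randn(2, 64, 16, 16)
    mu_f, logvar_f = product_of_gaussians(mu_vis, logvar_vis, mu_ir, logvar_ir)
    z_fused = reparameterize(mu_f, logvar_f)  # sampled latent passed to the decoder
    print(z_fused.shape)  # torch.Size([2, 64, 16, 16])
```

Under this formulation, the fused mean is pulled toward whichever modality has the lower variance at each latent dimension, which is one motivation for density-product fusion of the two encodings.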
Language | English |
WOS Accession Number | WOS:000691626400004 |
Publisher | Elsevier B.V. |
Source URL | http://ir.opt.ac.cn/handle/181661/95003 |
Collection | Xi'an Institute of Optics and Precision Mechanics, Dynamic Optical Imaging Research Laboratory |
Corresponding Author | Ren, Long |
Author Affiliations | 1. University of Chinese Academy of Sciences, No. 19(A) Yuquan Road, Shijingshan District, Beijing 100049, China; 2. Faculty of Electronics and Communications of Xi'an Jiaotong University, Xi'an 710049, China; 3. Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an 710119, China |
Recommended Citation (GB/T 7714) | Ren, Long, Pan, Zhibin, Cao, Jianzhong, et al. Infrared and visible image fusion based on variational auto-encoder and infrared feature compensation[J]. Infrared Physics and Technology, 2021, 117.
APA | Ren, Long, Pan, Zhibin, Cao, Jianzhong, & Liao, Jiawen. (2021). Infrared and visible image fusion based on variational auto-encoder and infrared feature compensation. Infrared Physics and Technology, 117.
MLA | Ren, Long, et al. "Infrared and visible image fusion based on variational auto-encoder and infrared feature compensation". Infrared Physics and Technology 117 (2021).
Deposit Method: OAI Harvesting
Source: Xi'an Institute of Optics and Precision Mechanics